Jan 28 18:13:07 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 28 18:13:07 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:07 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to
system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 
18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 28 18:13:08 crc 
restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 
18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 28 18:13:08 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 28 18:13:08 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0
Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 28 18:13:10 crc kubenswrapper[4985]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.529587 4985 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537719 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537762 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537768 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537776 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537783 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537792 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537801 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537807 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537813 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537819 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537825 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537831 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537837 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537843 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537848 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537854 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537860 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537866 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537872 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537878 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537883 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537889 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537895 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537901 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537907 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537912 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537918 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537924 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537929 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537944 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537950 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537956 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537962 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537968 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537974 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537980 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537986 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537992 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.537998 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538005 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538012 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538020 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.538028 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543003 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543025 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543035 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543041 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543049 4985 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543055 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543062 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543068 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543075 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543081 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543087 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543094 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543101 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543108 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543114 4985 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543121 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543129 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543135 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543142 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543148 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543154 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543162 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
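Note: the long runs of `feature_gate.go:330] unrecognized feature gate: ...` warnings above (they continue below, and the same set is re-emitted several times later in this boot) come from OpenShift-specific gate names being handed to the kubelet's upstream feature-gate parser, which only knows upstream Kubernetes gates; at W (warning) level they are informational. To get a deduplicated view of which gates are involved and how often each is re-reported, a minimal sketch, assuming this journal has been saved to a plain-text file (`kubelet.log` here is a hypothetical name):

```python
import re
from collections import Counter

# Matches the kubelet's warning format:
#   feature_gate.go:330] unrecognized feature gate: <Name>
GATE_RE = re.compile(r"unrecognized feature gate: (\S+)")

counts = Counter()
with open("kubelet.log", encoding="utf-8") as f:  # hypothetical dump of this journal
    for line in f:
        m = GATE_RE.search(line)
        if m:
            counts[m.group(1)] += 1

# Each distinct gate, with how many times the parser re-reported it.
for gate, n in sorted(counts.items()):
    print(f"{gate}: {n}")
```

In this excerpt the gate list is parsed at least four times during startup, so a count of three or four per gate is the expected steady state rather than a sign of a growing problem.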
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543171 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543177 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543184 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543191 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543198 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.543204 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585898 4985 flags.go:64] FLAG: --address="0.0.0.0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585927 4985 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585943 4985 flags.go:64] FLAG: --anonymous-auth="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585960 4985 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585970 4985 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585977 4985 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585987 4985 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.585996 4985 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586004 4985 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586012 4985 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586020 4985 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586028 4985 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586035 4985 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586044 4985 flags.go:64] FLAG: --cgroup-root="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586050 4985 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586057 4985 flags.go:64] FLAG: --client-ca-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586063 4985 flags.go:64] FLAG: --cloud-config="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586070 4985 flags.go:64] FLAG: --cloud-provider="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586076 4985 flags.go:64] FLAG: --cluster-dns="[]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586085 4985 flags.go:64] FLAG: --cluster-domain="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586091 4985 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586098 4985 flags.go:64] FLAG: --config-dir="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586104 4985 flags.go:64] FLAG: 
--container-hints="/etc/cadvisor/container_hints.json" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586112 4985 flags.go:64] FLAG: --container-log-max-files="5" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586121 4985 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586127 4985 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586135 4985 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586142 4985 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586148 4985 flags.go:64] FLAG: --contention-profiling="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586155 4985 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586161 4985 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586168 4985 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586174 4985 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586183 4985 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586190 4985 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586197 4985 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586203 4985 flags.go:64] FLAG: --enable-load-reader="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586210 4985 flags.go:64] FLAG: --enable-server="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586217 4985 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586231 4985 flags.go:64] FLAG: --event-burst="100" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586237 4985 flags.go:64] FLAG: --event-qps="50" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586244 4985 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586276 4985 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586285 4985 flags.go:64] FLAG: --eviction-hard="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586295 4985 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586302 4985 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586308 4985 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586316 4985 flags.go:64] FLAG: --eviction-soft="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586323 4985 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586329 4985 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586336 4985 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586343 4985 flags.go:64] FLAG: --experimental-mounter-path="" Jan 28 18:13:10 crc 
kubenswrapper[4985]: I0128 18:13:10.586349 4985 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586356 4985 flags.go:64] FLAG: --fail-swap-on="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586362 4985 flags.go:64] FLAG: --feature-gates="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586370 4985 flags.go:64] FLAG: --file-check-frequency="20s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586377 4985 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586384 4985 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586391 4985 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586397 4985 flags.go:64] FLAG: --healthz-port="10248" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586404 4985 flags.go:64] FLAG: --help="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586412 4985 flags.go:64] FLAG: --hostname-override="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586418 4985 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586425 4985 flags.go:64] FLAG: --http-check-frequency="20s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586431 4985 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586438 4985 flags.go:64] FLAG: --image-credential-provider-config="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586444 4985 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586451 4985 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586457 4985 flags.go:64] FLAG: --image-service-endpoint="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586464 4985 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586470 4985 flags.go:64] FLAG: --kube-api-burst="100" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586476 4985 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586483 4985 flags.go:64] FLAG: --kube-api-qps="50" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586490 4985 flags.go:64] FLAG: --kube-reserved="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586497 4985 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586504 4985 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586512 4985 flags.go:64] FLAG: --kubelet-cgroups="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586518 4985 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586526 4985 flags.go:64] FLAG: --lock-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586532 4985 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586538 4985 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586545 4985 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586555 4985 flags.go:64] 
FLAG: --log-json-split-stream="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586563 4985 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586569 4985 flags.go:64] FLAG: --log-text-split-stream="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586575 4985 flags.go:64] FLAG: --logging-format="text" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586582 4985 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586589 4985 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586595 4985 flags.go:64] FLAG: --manifest-url="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586629 4985 flags.go:64] FLAG: --manifest-url-header="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586638 4985 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586645 4985 flags.go:64] FLAG: --max-open-files="1000000" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586653 4985 flags.go:64] FLAG: --max-pods="110" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586660 4985 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586666 4985 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586673 4985 flags.go:64] FLAG: --memory-manager-policy="None" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586679 4985 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586686 4985 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586692 4985 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586699 4985 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586716 4985 flags.go:64] FLAG: --node-status-max-images="50" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586722 4985 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586728 4985 flags.go:64] FLAG: --oom-score-adj="-999" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586735 4985 flags.go:64] FLAG: --pod-cidr="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586741 4985 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586751 4985 flags.go:64] FLAG: --pod-manifest-path="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586757 4985 flags.go:64] FLAG: --pod-max-pids="-1" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586765 4985 flags.go:64] FLAG: --pods-per-core="0" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586771 4985 flags.go:64] FLAG: --port="10250" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586777 4985 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586783 4985 flags.go:64] FLAG: --provider-id="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586790 4985 
flags.go:64] FLAG: --qos-reserved="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586796 4985 flags.go:64] FLAG: --read-only-port="10255" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586803 4985 flags.go:64] FLAG: --register-node="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586809 4985 flags.go:64] FLAG: --register-schedulable="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586815 4985 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586828 4985 flags.go:64] FLAG: --registry-burst="10" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586834 4985 flags.go:64] FLAG: --registry-qps="5" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586841 4985 flags.go:64] FLAG: --reserved-cpus="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586848 4985 flags.go:64] FLAG: --reserved-memory="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586856 4985 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586863 4985 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586869 4985 flags.go:64] FLAG: --rotate-certificates="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586875 4985 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586882 4985 flags.go:64] FLAG: --runonce="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586889 4985 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586896 4985 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586903 4985 flags.go:64] FLAG: --seccomp-default="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586910 4985 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586916 4985 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586923 4985 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586930 4985 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586938 4985 flags.go:64] FLAG: --storage-driver-password="root" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586944 4985 flags.go:64] FLAG: --storage-driver-secure="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586950 4985 flags.go:64] FLAG: --storage-driver-table="stats" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586957 4985 flags.go:64] FLAG: --storage-driver-user="root" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586963 4985 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586970 4985 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586977 4985 flags.go:64] FLAG: --system-cgroups="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586983 4985 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.586995 4985 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587001 
4985 flags.go:64] FLAG: --tls-cert-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587007 4985 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587016 4985 flags.go:64] FLAG: --tls-min-version="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587022 4985 flags.go:64] FLAG: --tls-private-key-file="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587028 4985 flags.go:64] FLAG: --topology-manager-policy="none" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587035 4985 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587042 4985 flags.go:64] FLAG: --topology-manager-scope="container" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587050 4985 flags.go:64] FLAG: --v="2" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587061 4985 flags.go:64] FLAG: --version="false" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587073 4985 flags.go:64] FLAG: --vmodule="" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587083 4985 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587092 4985 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587271 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587279 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587286 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587293 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
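Note: the `flags.go:64] FLAG: --name="value"` block above is the kubelet echoing every command-line flag, defaults included, which it does at the verbosity set here (`FLAG: --v="2"`). When auditing a node's effective flags, it can help to pull that dump into a dictionary; a minimal sketch, under the same assumption of a saved journal file:

```python
import re

# Matches entries like: flags.go:64] FLAG: --max-pods="110"
# (a wrapped journal line may carry several FLAG entries, so use findall)
FLAG_RE = re.compile(r'FLAG: --([A-Za-z0-9-]+)="(.*?)"')

flags = {}
with open("kubelet.log", encoding="utf-8") as f:  # hypothetical journal dump
    for line in f:
        for name, value in FLAG_RE.findall(line):
            flags[name] = value

print(flags.get("system-reserved"))  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi
print(flags.get("cgroup-driver"))    # cgroupfs; the CRI runtime later overrides
                                     # this with "systemd" (see server.go:1437 below)
```

This is also where the two deprecation warnings at the top of this excerpt land in practice: `--system-reserved` and `--pod-infra-container-image` still appear in the dump, but the log advises moving them into the config file named by `--config="/etc/kubernetes/kubelet.conf"`.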
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587300 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587308 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587314 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587320 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587326 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587331 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587336 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587342 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587348 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587353 4985 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587361 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587366 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587371 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587376 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587382 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587387 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587392 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587397 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587403 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587408 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587413 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587418 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587423 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587428 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587435 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587442 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587447 4985 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587454 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587460 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587466 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587471 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587477 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587483 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587490 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587496 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587501 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587507 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587512 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587517 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587522 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587528 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587533 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587538 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587544 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587549 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587554 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587560 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587565 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587570 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587576 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587581 4985 feature_gate.go:330] 
unrecognized feature gate: VSphereMultiVCenters Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587586 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587591 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587596 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587601 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587606 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587611 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587617 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587624 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587631 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587636 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587642 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587647 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587652 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587658 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587664 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.587669 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.587680 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.684502 4985 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.684546 4985 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684628 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684637 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684643 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:13:10 crc 
kubenswrapper[4985]: W0128 18:13:10.684649 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684655 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684660 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684666 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684672 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684677 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684683 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684688 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684698 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684706 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684714 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684722 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684730 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684738 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684744 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684750 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684756 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684761 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684767 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684772 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684778 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684783 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684788 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684793 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684798 4985 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684803 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684810 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684816 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684821 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684826 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684830 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684835 4985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684840 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684845 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684850 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684855 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684861 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684868 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684873 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684878 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684883 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684888 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684893 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684898 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684905 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684911 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684916 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684922 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684926 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684932 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684937 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684942 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684947 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684952 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684957 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684962 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684966 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684971 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684976 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684981 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684985 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684990 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.684996 4985 feature_gate.go:330] 
unrecognized feature gate: Example Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685001 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685005 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685010 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685015 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685020 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.685029 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685211 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685218 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685224 4985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685230 4985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685235 4985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685240 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685244 4985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685264 4985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685270 4985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685276 4985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685283 4985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
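Note: in contrast to the per-gate warnings, the `feature_gate.go:386] feature gates: {map[...]}` entries record the gate set the kubelet actually applied, and the map is identical on each appearance in this boot: CloudDualStackNodeIPs, DisableKubeletCloudCredentialProviders, KMSv1, and ValidatingAdmissionPolicy forced on, the rest off. Turning that Go map literal into Python booleans, as a sketch using one entry copied verbatim from the journal above:

```python
import re

# One resolved-gates entry, copied verbatim from the journal above.
entry = ("feature gates: {map[CloudDualStackNodeIPs:true "
         "DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false "
         "EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false "
         "ProcMountType:false RouteExternalCertificate:false "
         "ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false "
         "UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false "
         "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

# Go prints a map as {map[Key:value Key:value ...]}; pull out Name:bool pairs.
gates = {name: val == "true"
         for name, val in re.findall(r"(\w+):(true|false)", entry)}

enabled = sorted(g for g, on in gates.items() if on)
print(enabled)  # ['CloudDualStackNodeIPs', 'DisableKubeletCloudCredentialProviders',
                #  'KMSv1', 'ValidatingAdmissionPolicy']
```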
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685289 4985 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685294 4985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685299 4985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685304 4985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685309 4985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685314 4985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685318 4985 feature_gate.go:330] unrecognized feature gate: Example Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685323 4985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685328 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685333 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685338 4985 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685343 4985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685348 4985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685354 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685358 4985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685363 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685368 4985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685373 4985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685379 4985 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685384 4985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685389 4985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685394 4985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685399 4985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685403 4985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685409 4985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685414 4985 feature_gate.go:330] 
unrecognized feature gate: VSphereControlPlaneMachineSet Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685419 4985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685424 4985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685428 4985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685433 4985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685438 4985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685443 4985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685448 4985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685453 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685459 4985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685466 4985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685471 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685477 4985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685482 4985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685487 4985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685494 4985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685501 4985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685508 4985 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685513 4985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685518 4985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685524 4985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685529 4985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685534 4985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685540 4985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685545 4985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685550 4985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685555 4985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685560 4985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685565 4985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685572 4985 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685577 4985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685582 4985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685587 4985 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685592 4985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 28 18:13:10 crc kubenswrapper[4985]: W0128 18:13:10.685596 4985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.685604 4985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.685831 4985 server.go:940] "Client rotation is on, will bootstrap in background" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.693083 4985 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.693183 4985 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
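Note: the entries that follow show client certificate rotation starting: the current client cert expires 2026-02-24 05:52:08 UTC, the rotation deadline is 2025-12-03 02:42:34 UTC, and the first CSR attempt fails with `connection refused` against api-int.crc.testing:6443, presumably because kube-apiserver is not yet serving this early in boot on a single-node cluster; the certificate manager retries. A sketch of the arithmetic on those two logged timestamps:

```python
from datetime import datetime, timezone

# Timestamps copied from the certificate_manager entries below.
fmt = "%Y-%m-%d %H:%M:%S"
expiry   = datetime.strptime("2026-02-24 05:52:08", fmt).replace(tzinfo=timezone.utc)
deadline = datetime.strptime("2025-12-03 02:42:34", fmt).replace(tzinfo=timezone.utc)

# Validity the kubelet leaves itself between the rotation deadline and expiry.
print(expiry - deadline)  # 83 days, 3:09:34 for the values above
```

The deadline appears to be a jittered fraction of the certificate's lifetime (upstream client-go jitters it to roughly 70-85% of the validity window), which is why it does not line up with the expiry itself.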
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.695237 4985 server.go:997] "Starting client certificate rotation"
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.695281 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.696541 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-03 02:42:34.243482987 +0000 UTC
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.696735 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.826922 4985 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.830375 4985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 28 18:13:10 crc kubenswrapper[4985]: E0128 18:13:10.833556 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:10 crc kubenswrapper[4985]: I0128 18:13:10.856359 4985 log.go:25] "Validated CRI v1 runtime API"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.024483 4985 log.go:25] "Validated CRI v1 image API"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.026448 4985 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.040674 4985 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-28-18-07-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.040714 4985 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:42 fsType:tmpfs blockSize:0}]
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.062786 4985 manager.go:217] Machine: {Timestamp:2026-01-28 18:13:11.059922838 +0000 UTC m=+1.886485699 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:a73758a0-c5e5-4e2e-bacd-4099da9969a4 BootID:ef51598b-c07a-479e-807b-3fca14f8607d Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:42 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:d9:ec:ca Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:d9:ec:ca Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:1f:d8:b1 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:16:1d:3d Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:ec:ce:8e Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:3f:88:71 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:82:3c:5c:b0:d7:ac Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:76:bd:68:fe:f8:02 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063083 4985 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063267 4985 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063600 4985 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063839 4985 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.063874 4985 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.064110 4985 topology_manager.go:138] "Creating topology manager with none policy"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.064124 4985 container_manager_linux.go:303] "Creating device plugin manager"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.080709 4985 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.080748 4985 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.107865 4985 state_mem.go:36] "Initialized new in-memory state store"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.108155 4985 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.113945 4985 kubelet.go:418] "Attempting to sync node with API server"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.113981 4985 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.114075 4985 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.114094 4985 kubelet.go:324] "Adding apiserver pod source"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.114112 4985 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.121427 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.121560 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.121613 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.121746 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.123128 4985 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.125037 4985 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.126546 4985 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132669 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132694 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132702 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132710 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132722 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132732 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132741 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132753 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132763 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132773 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132784 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.132792 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.145779 4985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.146402 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.146622 4985 server.go:1280] "Started kubelet"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.146844 4985 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.147781 4985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 28 18:13:11 crc systemd[1]: Started Kubernetes Kubelet.
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.148870 4985 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.180852 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.180926 4985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.181529 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:27:05.117889909 +0000 UTC
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.182180 4985 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.182226 4985 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.182426 4985 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.183381 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.183576 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.183713 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError"
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.183861 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="200ms"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.186530 4985 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.186551 4985 factory.go:55] Registering systemd factory
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.186560 4985 factory.go:221] Registration of the systemd container factory successfully
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189514 4985 factory.go:153] Registering CRI-O factory
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189562 4985 factory.go:221] Registration of the crio container factory successfully
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189596 4985 factory.go:103] Registering Raw factory
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.189695 4985 manager.go:1196] Started watching for new ooms in manager
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.200995 4985 server.go:460] "Adding debug handlers to kubelet server"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.202157 4985 manager.go:319] Starting recovery of all containers
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.206904 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207034 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207111 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207201 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207300 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207377 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207451 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207523 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207600 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207685 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207765 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.207838 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208148 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208233 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208340 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208420 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208502 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208581 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208658 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208749 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208828 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208903 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.208979 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209057 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209137 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209217 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209310 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209390 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209465 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209540 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209625 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209701 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209773 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209846 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.209920 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210005 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210082 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210152 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210220 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210309 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210384 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210469 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210565 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210649 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210729 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210808 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210894 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.210974 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211070 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211161 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211242 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211356 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211447 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211535 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211717 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.201060 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ef7a4e24cefec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,LastTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.211866 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212062 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212086 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212099 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212113 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212126 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212137 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212179 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212191 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212203 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212218 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212230 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212241 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212271 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212288 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212300 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212313 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212329 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212343 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212355 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212368 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212382 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212394 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212409 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212422 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212438 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212451 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212464 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212480 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212495 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212510 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212522 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212535 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212550 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212562 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212574 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212588 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212602 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212613 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212626 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212640 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212652 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212664 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212680 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212693 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212706 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212719 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212733 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212747 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212767 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212781 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212797 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212812 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212826 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212839 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212855 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212870 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212884 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212898 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212915 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212930 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212941 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212954 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212967 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212982 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.212997 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213011 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213023 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213037 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213050 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213066 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213081 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213094 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213106 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213129 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213143 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213157 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213174 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213187 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213200 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213212 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213226 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213238 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213267 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213284 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext=""
Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213298 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f"
volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213311 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213323 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213338 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213350 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213367 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213380 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213393 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213406 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213420 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213433 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213447 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" 
volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213462 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213475 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213488 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213500 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213513 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213526 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213564 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213584 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213597 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213610 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213624 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213637 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213649 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213662 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213674 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213686 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213702 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213715 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213727 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213742 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213757 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213771 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213787 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213798 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213809 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213822 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213835 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213846 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213858 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213870 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213884 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213896 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213910 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213922 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213935 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213946 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213957 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213966 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213977 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.213987 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.214000 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.214011 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.214023 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216137 4985 reconstruct.go:144] "Volume is marked device as uncertain and added 
into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216164 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216180 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216192 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216952 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216975 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.216991 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217004 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217018 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217031 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217044 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217059 4985 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217073 4985 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217084 4985 reconstruct.go:97] "Volume reconstruction finished" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.217093 4985 reconciler.go:26] "Reconciler: start to sync state" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.230578 4985 manager.go:324] Recovery completed Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.244766 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.252296 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.252565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.252580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.256279 4985 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.256388 4985 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.256417 4985 state_mem.go:36] "Initialized new in-memory state store" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.259054 4985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.262594 4985 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.262655 4985 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.262695 4985 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.262871 4985 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 18:13:11 crc kubenswrapper[4985]: W0128 18:13:11.265592 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.265710 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.283994 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.363370 4985 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.384129 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.384560 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="400ms" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.460321 4985 policy_none.go:49] "None policy: Start" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.461594 4985 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.461642 4985 state_mem.go:35] "Initializing new in-memory state store" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.484364 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.564344 4985 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.585297 4985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.664949 4985 manager.go:334] "Starting Device Plugin manager" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.665474 4985 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.665508 4985 server.go:79] "Starting device plugin registration server" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666139 4985 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 
18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666160 4985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666427 4985 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666526 4985 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.666544 4985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.691169 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.766749 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.767781 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.768321 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.786031 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="800ms" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.965292 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.965391 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.966989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967186 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967935 4985 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.967970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968364 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968451 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968463 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968484 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968415 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.968418 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969296 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969490 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969586 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969593 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969703 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969724 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.969948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: E0128 18:13:11.969991 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970193 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970283 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970334 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970495 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970942 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.970973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971071 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971089 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:11 crc kubenswrapper[4985]: I0128 18:13:11.971870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.026981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027016 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027038 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027091 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027107 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027125 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027206 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027271 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.027373 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 
18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128180 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128296 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128325 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128366 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128386 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128445 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128507 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod 
\"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128467 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128547 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128619 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128572 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128597 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128621 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128623 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128729 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128745 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128702 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128795 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128825 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128847 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128870 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128907 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128932 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.128945 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.129045 4985 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.148389 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.182501 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 15:20:56.485545204 +0000 UTC Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.208246 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.208472 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.319007 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.333195 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.339278 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.364964 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.365069 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.370535 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371359 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.371760 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.372248 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.376816 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.444753 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c WatchSource:0}: Error finding container 3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c: Status 404 returned error can't find the container with id 3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.454308 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635 WatchSource:0}: Error finding container 447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635: Status 404 returned error can't find the container with id 447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635 Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.460244 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12 WatchSource:0}: Error finding container 0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12: Status 404 returned error can't find the container with id 0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12 Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.461346 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c WatchSource:0}: Error finding container 044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c: Status 404 returned error can't find the container with id 044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.462138 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-d639668d476d32a5f7c5b3fe7f6606100041f06e458095c3f365ae44dcbe708f WatchSource:0}: Error finding container d639668d476d32a5f7c5b3fe7f6606100041f06e458095c3f365ae44dcbe708f: Status 404 returned error can't find the container with id d639668d476d32a5f7c5b3fe7f6606100041f06e458095c3f365ae44dcbe708f Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.586849 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="1.6s" Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.659148 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.659302 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:12 crc kubenswrapper[4985]: W0128 18:13:12.712156 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.712325 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:12 crc kubenswrapper[4985]: I0128 18:13:12.954243 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 18:13:12 crc kubenswrapper[4985]: E0128 18:13:12.955350 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.147705 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.173053 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.174950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.175014 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.175036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.175086 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:13 crc kubenswrapper[4985]: E0128 18:13:13.175801 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.182974 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:17:39.63451653 +0000 UTC Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.283648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"044b433110577f9f6d86af0e4f25c3cc9f043fe4b9f89a9aa0e7eeb139034a6c"} Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.285062 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"447cccc984e8c4acceb497204efff78e4320f6320be8387f3a6d0f95772e0635"} Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.286341 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3dabca12b20e3e6225bcb6e54b01be3faef6f53bb25451609688004b8275f95c"} Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.288377 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d639668d476d32a5f7c5b3fe7f6606100041f06e458095c3f365ae44dcbe708f"} Jan 28 18:13:13 crc kubenswrapper[4985]: I0128 18:13:13.290321 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0cd96db233c3ca92edeb9d45b0051d8aac558d5d1263af9a951a2ba6340c4d12"} Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.148061 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.208981 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 16:08:48.467441226 +0000 UTC Jan 28 18:13:14 crc kubenswrapper[4985]: E0128 18:13:14.209427 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="3.2s" Jan 28 18:13:14 crc kubenswrapper[4985]: W0128 18:13:14.286526 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:14 crc kubenswrapper[4985]: E0128 18:13:14.286596 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.776051 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779351 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:14 crc kubenswrapper[4985]: I0128 18:13:14.779407 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:14 crc kubenswrapper[4985]: E0128 18:13:14.780160 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:15 crc kubenswrapper[4985]: W0128 18:13:15.007449 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:15 crc kubenswrapper[4985]: E0128 18:13:15.007601 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.148325 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.209693 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 09:38:26.183363736 +0000 UTC Jan 28 18:13:15 crc kubenswrapper[4985]: W0128 18:13:15.236135 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:15 crc kubenswrapper[4985]: E0128 18:13:15.236329 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.297913 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945" exitCode=0 Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.298018 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945"} Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.298141 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.299619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.299711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.299748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.302712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db"} Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.305350 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415" exitCode=0 Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.305561 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.305624 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415"} Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.307417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.307467 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.307487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.308466 4985 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85" exitCode=0 Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.308567 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" 
event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85"} Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.308638 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.310068 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311244 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311536 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5" exitCode=0 Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311584 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5"} Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.311699 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.312993 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.313029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:15 crc kubenswrapper[4985]: I0128 18:13:15.313048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:15 crc kubenswrapper[4985]: W0128 18:13:15.874069 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:15 crc kubenswrapper[4985]: E0128 18:13:15.874172 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.148127 4985 csi_plugin.go:884] Failed 
to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.210905 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:19:40.689302032 +0000 UTC Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.318002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.318070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.320075 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.320101 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.322331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.322409 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.323988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"83f697b1c16bcd1e36101e6b455b45641dbffe1cbf333e78f6a61de9228652f5"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324026 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.324799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.325769 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07" exitCode=0 Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.325826 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07"} Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.325859 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.326888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.326914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.326923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:16 crc kubenswrapper[4985]: I0128 18:13:16.973724 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 28 18:13:16 crc kubenswrapper[4985]: E0128 18:13:16.975522 4985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.148786 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.211102 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 04:01:37.786958139 +0000 UTC Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.332172 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd"} Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.332365 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.333654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.333704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.333722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.336851 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b"} Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.336876 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.337998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.338039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.338056 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.339969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6"} Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342812 4985 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5" exitCode=0 Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5"} Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342927 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.342928 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344290 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344324 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.344448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:17 crc kubenswrapper[4985]: E0128 18:13:17.410688 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="6.4s" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.571794 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.730720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.731228 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.731363 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.980932 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:17 crc kubenswrapper[4985]: I0128 18:13:17.982193 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 18:13:17 crc kubenswrapper[4985]: E0128 18:13:17.982850 4985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.195:6443: connect: connection refused" node="crc" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.147846 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.212033 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 13:13:43.82982188 +0000 UTC Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.352237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.352312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.352329 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.356937 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.356995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0"} Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357005 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357097 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357104 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.357179 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358214 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358265 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.358591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:18 crc kubenswrapper[4985]: E0128 18:13:18.641293 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188ef7a4e24cefec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,LastTimestamp:2026-01-28 18:13:11.146573804 +0000 UTC m=+1.973136625,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:13:18 crc kubenswrapper[4985]: I0128 18:13:18.704237 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:18 crc kubenswrapper[4985]: W0128 18:13:18.932044 4985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:18 crc kubenswrapper[4985]: E0128 18:13:18.932156 4985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.195:6443: connect: connection refused" logger="UnhandledError" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.148186 4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.212426 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 11:07:23.448605328 +0000 UTC Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.361833 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.364166 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc" exitCode=255 Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.364267 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc"} Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.364398 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.365910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.365950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.365965 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.366661 4985 scope.go:117] "RemoveContainer" containerID="50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 
18:13:19.369342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d4b15aae726dd7880c717d6d1dc56ace05f73be487cba796379028df3328c34e"} Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369378 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8"} Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369469 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.369496 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.370546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.371574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.371599 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:19 crc kubenswrapper[4985]: I0128 18:13:19.371606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.130317 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.130700 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.130757 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": dial tcp 192.168.126.11:6443: connect: connection refused" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.148137 
4985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.213045 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 07:15:44.042354573 +0000 UTC Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.215327 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.373926 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375521 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"} Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375642 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375686 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.375643 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:20 crc kubenswrapper[4985]: I0128 18:13:20.376738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.214238 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 14:10:59.533753477 +0000 UTC Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.377463 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.377503 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.377466 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:21 crc kubenswrapper[4985]: 
I0128 18:13:21.378549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.378624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.378639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.379210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.379270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.379284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:21 crc kubenswrapper[4985]: E0128 18:13:21.691974 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 18:13:21 crc kubenswrapper[4985]: I0128 18:13:21.888722 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.161600 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.162295 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.164322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.164487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.164513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.215084 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 06:31:44.389974342 +0000 UTC Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.380781 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.380805 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.382984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.383035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.383054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.383588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 
Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.383811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:22 crc kubenswrapper[4985]: I0128 18:13:22.384008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:23 crc kubenswrapper[4985]: I0128 18:13:23.216144 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 12:27:01.375896519 +0000 UTC
Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.217069 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 02:38:16.591127887 +0000 UTC
Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.383353 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385460 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:24 crc kubenswrapper[4985]: I0128 18:13:24.385517 4985 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 18:13:25 crc kubenswrapper[4985]: I0128 18:13:25.217318 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 03:07:53.978459953 +0000 UTC
Jan 28 18:13:25 crc kubenswrapper[4985]: I0128 18:13:25.521082 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.022632 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.023021 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.024829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.024898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.024927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:26 crc kubenswrapper[4985]: I0128 18:13:26.218096 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:14:32.985380704 +0000 UTC
Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.218782 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:03:08.57583181 +0000 UTC
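The startup-probe transitions above (unhealthy, then started a few entries below) and the probe failures that follow all reduce to a single HTTPS GET with a hard timeout: a transport error or any status outside 200-399 counts as a failure. A stripped-down sketch of such a check; the URL and timeout are taken from the cluster-policy-controller entries below, and skipping TLS verification mirrors how probes reach self-signed control-plane endpoints, but this is an illustration, not the kubelet's actual prober code:

    package main

    import (
            "crypto/tls"
            "fmt"
            "net/http"
            "time"
    )

    // probe performs one GET with a hard timeout, in the style of an HTTP(S)
    // startup/readiness probe: transport errors and non-2xx/3xx statuses fail.
    func probe(url string, timeout time.Duration) error {
            client := &http.Client{
                    Timeout: timeout,
                    Transport: &http.Transport{
                            // Sketch only: control-plane healthz endpoints are
                            // typically self-signed, so verification is skipped.
                            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
                    },
            }
            resp, err := client.Get(url)
            if err != nil {
                    // e.g. "Client.Timeout exceeded while awaiting headers",
                    // as in the failure output logged below.
                    return err
            }
            defer resp.Body.Close()
            if resp.StatusCode < 200 || resp.StatusCode >= 400 {
                    // e.g. the 403 from /livez logged below.
                    return fmt.Errorf("HTTP probe failed with statuscode: %d", resp.StatusCode)
            }
            return nil
    }

    func main() {
            fmt.Println(probe("https://192.168.126.11:10357/healthz", 5*time.Second))
    }

This also explains the 403 failure below: the probe reaches /livez unauthenticated, and the apiserver rejects system:anonymous, which the prober treats the same as any other non-2xx/3xx response.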
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.738812 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.740073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.740318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.740477 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:27 crc kubenswrapper[4985]: I0128 18:13:27.749064 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.218946 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 12:44:41.101007374 +0000 UTC Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.399270 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.400308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.400372 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.400387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.526225 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 18:13:28 crc kubenswrapper[4985]: I0128 18:13:28.526300 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 18:13:29 crc kubenswrapper[4985]: I0128 18:13:29.023158 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:13:29 crc kubenswrapper[4985]: I0128 18:13:29.023410 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:13:29 crc 
Jan 28 18:13:29 crc kubenswrapper[4985]: I0128 18:13:29.220067 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 23:49:46.769441408 +0000 UTC
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.138333 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.138965 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.140991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.141058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.141078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.146751 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.220626 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 14:37:16.427270694 +0000 UTC
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.332014 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.332333 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.334036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.334081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.334093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.347672 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.405317 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.405349 4985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.406861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.406957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.406973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
event="NodeHasSufficientMemory" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.407031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:30 crc kubenswrapper[4985]: I0128 18:13:30.407051 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:31 crc kubenswrapper[4985]: I0128 18:13:31.221812 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 17:10:09.063673005 +0000 UTC Jan 28 18:13:31 crc kubenswrapper[4985]: E0128 18:13:31.692470 4985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 18:13:32 crc kubenswrapper[4985]: I0128 18:13:32.222754 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 22:17:59.647061462 +0000 UTC Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.544476 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 05:40:48.808006848 +0000 UTC Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.824799 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="7s" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855391 4985 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855422 4985 trace.go:236] Trace[54370517]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:13:21.356) (total time: 12498ms): Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[54370517]: ---"Objects listed" error: 12498ms (18:13:33.855) Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[54370517]: [12.498801087s] [12.498801087s] END Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855444 4985 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.855604 4985 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.866183 4985 trace.go:236] Trace[1451274034]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:13:20.828) (total time: 13037ms): Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[1451274034]: ---"Objects listed" error: 13037ms (18:13:33.866) Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[1451274034]: [13.037535893s] [13.037535893s] END Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.866216 4985 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.866686 4985 trace.go:236] Trace[291536343]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 18:13:21.153) (total time: 12713ms): Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[291536343]: ---"Objects listed" error: 12713ms (18:13:33.866) Jan 28 18:13:33 crc kubenswrapper[4985]: Trace[291536343]: [12.713250654s] [12.713250654s] END Jan 28 18:13:33 crc 
kubenswrapper[4985]: I0128 18:13:33.866716 4985 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.870838 4985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.875294 4985 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.875425 4985 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876749 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.876773 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.916342 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154a
fa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923052 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc 
kubenswrapper[4985]: I0128 18:13:33.923120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.923160 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.936854 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941036 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51904->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 
18:13:33.941107 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:51904->192.168.126.11:17697: read: connection reset by peer" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941566 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941616 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941856 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.941882 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943770 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.943804 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.954433 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958040 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958052 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.958085 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.967059 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970702 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.970736 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.980737 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:33 crc kubenswrapper[4985]: E0128 18:13:33.980913 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982633 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982646 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:33 crc kubenswrapper[4985]: I0128 18:13:33.982680 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:33Z","lastTransitionTime":"2026-01-28T18:13:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.085833 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.138016 4985 apiserver.go:52] "Watching apiserver" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.172808 4985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.173330 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.173925 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174051 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.174150 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174316 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174427 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174545 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.174667 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.174745 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.174913 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.176750 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.177753 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.178124 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.178439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179319 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179465 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179735 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.179787 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.183507 4985 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.183863 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188599 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.188714 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.227039 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.245993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258192 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258296 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258352 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258403 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258432 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258481 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258506 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258530 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258552 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258567 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258577 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258658 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258649 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258686 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258773 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258800 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258823 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258842 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258842 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258881 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258900 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258935 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258951 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258973 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.258990 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259009 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259025 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259057 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259074 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259123 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259141 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259167 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259237 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259279 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259318 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259366 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259387 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259403 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259423 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259443 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" 
(UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259464 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259482 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259500 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259520 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259587 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259606 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259665 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod 
\"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259703 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259653 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259724 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259782 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259812 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259855 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259933 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259950 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259966 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.259989 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260007 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260031 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260011 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260067 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260102 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260115 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260125 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260148 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260177 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260202 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260233 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260286 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260294 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260314 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260342 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260370 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260385 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260398 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260415 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260422 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260450 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260477 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260507 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260534 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260559 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260586 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260567 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260616 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260646 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260678 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260733 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260761 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260788 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260815 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260841 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260868 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260892 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260917 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260944 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260968 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260994 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261014 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261038 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261063 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261117 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261163 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261187 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261212 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261236 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261263 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261302 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261354 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261376 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261399 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261423 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261449 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261478 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261505 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260762 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260850 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). 
InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.260886 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261020 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261039 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261162 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261340 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261558 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.261584 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 18:13:34.761546196 +0000 UTC m=+25.588109047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261597 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.261257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.262462 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263125 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.262749 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.262816 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263041 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263573 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263690 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263757 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.263848 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264068 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264023 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). 
InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264138 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264256 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264292 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264300 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264506 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264524 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264536 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264556 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.264669 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265238 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265258 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265979 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.265999 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266264 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266431 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266712 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266746 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266812 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.266859 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267073 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267135 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267411 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267414 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267731 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.267953 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268029 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268519 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268794 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268959 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.268964 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269016 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269079 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269116 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269142 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269519 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269541 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269567 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269595 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269613 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269650 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269672 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269692 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269710 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269749 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269771 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269802 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269822 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269841 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod 
\"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269882 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269904 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269924 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269904 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269943 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270095 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270147 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270186 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270215 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270243 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270297 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270324 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270353 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270379 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270406 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270473 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270500 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270526 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270552 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270580 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270608 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270636 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270673 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270725 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270751 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270780 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 18:13:34 crc 
kubenswrapper[4985]: I0128 18:13:34.270888 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270913 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270938 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270978 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271003 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271030 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271059 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271119 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271152 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc 
kubenswrapper[4985]: I0128 18:13:34.271185 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271227 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271258 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271372 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271398 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271451 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271474 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271536 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271660 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271835 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271868 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271929 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272086 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272104 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272118 4985 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272134 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272150 4985 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272164 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node 
\"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272179 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272194 4985 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272208 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272221 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272236 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272251 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272302 4985 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272318 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272334 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272347 4985 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272362 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272375 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272391 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" 
Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272408 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272424 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272441 4985 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272454 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272470 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272501 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269205 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269247 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269357 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269491 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269951 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.269970 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270331 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270470 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270751 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.270890 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271108 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273233 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.271519 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272033 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272100 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.272712 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273316 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273498 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273736 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.274031 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.274234 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.273085 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.275524 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276006 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276560 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276569 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.276789 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277129 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277254 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277491 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277754 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277769 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277788 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.277986 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.278668 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.278988 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279051 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279286 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279455 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279636 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.279717 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.280536 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.280863 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.280971 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281171 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281545 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281783 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.281976 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282074 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282528 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282917 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.282610 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283205 4985 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283566 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283586 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.283690 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284044 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284113 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.78409521 +0000 UTC m=+25.610658031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284345 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284714 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284748 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284705 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.284806 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.284841 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.78481582 +0000 UTC m=+25.611378861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285004 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285206 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285315 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285412 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285501 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285329 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.285865 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286162 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286444 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.286570 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.290486 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293649 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293837 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.293907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294140 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294350 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.294575 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.295317 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.301238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301551 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301575 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.301567 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301592 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301847 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.801828521 +0000 UTC m=+25.628391342 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301625 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301884 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301895 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.301921 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:34.801915203 +0000 UTC m=+25.628478014 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302110 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.302742 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.303623 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.303766 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.304870 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.305908 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.305948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.306289 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.306436 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.311905 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.313057 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315108 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315122 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315200 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315396 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315600 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315818 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315940 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.316548 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.315137 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.317886 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.318658 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.318967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.318951 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319877 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319974 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.319991 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.320607 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.320643 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.320773 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.321341 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.321977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323184 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323387 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323465 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.323711 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325590 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325641 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.325764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.326683 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.329625 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.331964 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.332576 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.346407 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.346760 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.373949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374028 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374111 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374149 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod 
\"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374157 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374210 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374229 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374236 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374243 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374299 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374309 4985 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374519 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374591 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.374973 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375536 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375557 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" 
DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375567 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375970 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.375981 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.376211 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.376222 4985 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377217 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377567 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377727 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.377991 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378303 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378367 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378449 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378525 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378609 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378622 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378633 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378643 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.378653 4985 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.379034 4985 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.379115 4985 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.379543 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381622 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381888 4985 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381953 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381968 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.381996 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.382022 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.382035 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.382046 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383458 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383495 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383521 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383542 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383565 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383582 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383596 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383616 4985 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383630 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383644 4985 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383658 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383678 4985 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383691 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383705 4985 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383720 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383738 4985 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383753 4985 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383767 4985 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383787 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383813 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383826 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383838 4985 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383885 4985 
reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383899 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383911 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383923 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383940 4985 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383977 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.383990 4985 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384001 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384021 4985 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384033 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384045 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384061 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384100 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384117 4985 reconciler_common.go:293] 
"Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384132 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384152 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384191 4985 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384206 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384220 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384238 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384286 4985 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384302 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384319 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384331 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384368 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384382 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 
18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384398 4985 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384410 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384465 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384476 4985 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384495 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384507 4985 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384542 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384555 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384572 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384584 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384621 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384639 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384650 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384661 4985 reconciler_common.go:293] "Volume detached for 
volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384672 4985 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384707 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384720 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384731 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384745 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384780 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384795 4985 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384810 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384824 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384864 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384879 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384892 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384908 4985 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384925 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384959 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384971 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.384989 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385001 4985 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385035 4985 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385047 4985 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385064 4985 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385075 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385088 4985 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385126 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385138 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385150 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: 
\"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385162 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385195 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385209 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385220 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385232 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385282 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385296 4985 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385313 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385325 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385365 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385381 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385396 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385413 4985 reconciler_common.go:293] "Volume 
detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385444 4985 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385460 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385475 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385493 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385509 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385548 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385561 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385578 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385589 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385625 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385641 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385654 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 
18:13:34.385666 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385702 4985 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385719 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385729 4985 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385741 4985 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385752 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385791 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385803 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385814 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385825 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385861 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385875 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.385886 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.393792 
4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.395585 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.404728 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.418022 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.487326 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.487376 4985 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.487396 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.502834 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.507655 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: W0128 18:13:34.516668 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692 WatchSource:0}: Error finding container 954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692: Status 404 returned error can't find the container with id 954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692 Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.518230 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.531580 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 18:13:34 crc kubenswrapper[4985]: W0128 18:13:34.535845 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa WatchSource:0}: Error finding container 3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa: Status 404 returned error can't find the container with id 3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.545422 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 10:04:21.354606893 +0000 UTC Jan 28 18:13:34 crc kubenswrapper[4985]: W0128 18:13:34.549564 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb WatchSource:0}: Error finding container 24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb: Status 404 returned error can't find the container with id 24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.611399 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714561 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.714572 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.790202 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.790344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.790386 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790428 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.790381512 +0000 UTC m=+26.616944373 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790523 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790573 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790603 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.790584667 +0000 UTC m=+26.617147618 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.790715 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.7906882 +0000 UTC m=+26.617251121 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.816881 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.891431 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.891511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891694 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891724 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891744 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.891818 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.89179565 +0000 UTC m=+26.718358511 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892232 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892310 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892330 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: E0128 18:13:34.892415 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:35.892393667 +0000 UTC m=+26.718956498 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.919938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.919999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.920017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.920043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:34 crc kubenswrapper[4985]: I0128 18:13:34.920061 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:34Z","lastTransitionTime":"2026-01-28T18:13:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022789 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.022822 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126424 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.126487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.230946 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.270069 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.270589 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.271355 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.271942 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334140 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.334178 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.425908 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.426551 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.435447 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" exitCode=255 Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437168 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.437551 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.446437 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.447345 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.448568 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.450471 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.451643 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.453156 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.484804 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.485806 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.540979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.541081 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.546175 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 14:26:51.213443775 +0000 UTC Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.546859 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.547887 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.617683 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.618466 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.644558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.644976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.644997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.645021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.645035 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.747806 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.799898 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.800015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.800050 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800153 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800184 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.800144377 +0000 UTC m=+28.626707218 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800237 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.800224559 +0000 UTC m=+28.626787520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800395 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.800447 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.800438305 +0000 UTC m=+28.627001136 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.850954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.851037 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.851061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.851098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.851123 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.878367 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.878823 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.879620 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.880214 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.880707 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.881256 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.881777 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.882473 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.882917 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.883607 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.884276 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.885450 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.886132 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.886687 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 18:13:35 
crc kubenswrapper[4985]: I0128 18:13:35.887240 4985 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.887418 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.888807 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.900317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.900380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900463 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900486 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900499 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900561 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.900544267 +0000 UTC m=+28.727107088 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900661 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900719 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900743 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:35 crc kubenswrapper[4985]: E0128 18:13:35.900833 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:37.900804974 +0000 UTC m=+28.727367835 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953752 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:35 crc kubenswrapper[4985]: I0128 18:13:35.953809 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:35Z","lastTransitionTime":"2026-01-28T18:13:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.054375 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.055882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.055910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.055920 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.056124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.056134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.056220 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.060583 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.092139 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.093927 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.095587 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.097147 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.099963 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.102581 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.103693 4985 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.104392 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.104921 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.105542 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.106069 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.107717 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.108714 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.109300 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.110290 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.110878 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.111984 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.112586 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113151 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"24d3f0d27159e20caf8fe78a5888ed66791a2f6c90e8acd59af0d337112c26eb"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113190 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a93b0123b7822ad929b9d678b628398e9494e3b0d5796f8c0d14e9c2e51d3aa"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113327 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"954dfbe20458f2a276ef5b967ea13a5b9e9aba3d9c2e1d94ec51df169d549692"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113339 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.113369 4985 scope.go:117] "RemoveContainer" containerID="50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.118560 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.129829 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.141064 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.153096 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159000 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.159099 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.166395 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.180215 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.190483 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.199354 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.211729 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.223084 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.232205 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.246178 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.256531 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.262136 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.263223 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.263241 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.263223 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.263374 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.263488 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.263660 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.309835 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.310796 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.311024 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.311290 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.365557 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.441356 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.448579 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.455510 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.458851 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.459229 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.459820 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: E0128 18:13:36.465668 4985 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468647 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.468714 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.480549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.492412 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.501168 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.512108 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.520239 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.530353 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://50c7c1874aa8d1bddf5c1a8a85bf187572aa21fe849a04e4c4c0b5ddba7b00fc\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:19Z\\\",\\\"message\\\":\\\"W0128 18:13:18.585836 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 
18:13:18.586705 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769623998 cert, and key in /tmp/serving-cert-3647538429/serving-signer.crt, /tmp/serving-cert-3647538429/serving-signer.key\\\\nI0128 18:13:18.896551 1 observer_polling.go:159] Starting file observer\\\\nW0128 18:13:18.981716 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:18.981881 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:18.988226 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-3647538429/tls.crt::/tmp/serving-cert-3647538429/tls.key\\\\\\\"\\\\nF0128 18:13:19.174577 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' 
detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.539836 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.546992 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:37:14.156060105 +0000 UTC Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.550010 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.559226 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.569598 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 
18:13:36.571769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.571848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.583153 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.594594 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.603357 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.614018 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.624409 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.675350 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778855 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.778889 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882131 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.882158 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:36 crc kubenswrapper[4985]: I0128 18:13:36.985167 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:36Z","lastTransitionTime":"2026-01-28T18:13:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.088148 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.190919 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.293550 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396667 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.396729 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499802 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.499822 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.547443 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 22:33:29.784992068 +0000 UTC Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.602601 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705261 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.705323 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.808520 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.808963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.809042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.809158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.809241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.820277 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.820446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.820581 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.820747 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.820901 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.820878178 +0000 UTC m=+32.647441009 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.821498 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.821484705 +0000 UTC m=+32.648047526 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.821686 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.821819 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.821792764 +0000 UTC m=+32.648355585 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.911716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.912464 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:37Z","lastTransitionTime":"2026-01-28T18:13:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.921779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:37 crc kubenswrapper[4985]: I0128 18:13:37.921937 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922203 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922344 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922438 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.922572 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.922553414 +0000 UTC m=+32.749116245 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923314 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923466 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923557 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:37 crc kubenswrapper[4985]: E0128 18:13:37.923687 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:41.923664925 +0000 UTC m=+32.750227766 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.015859 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118080 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.118165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.220995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.221005 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.263628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.263728 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.263772 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.263749 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.264066 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.264183 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.323490 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.426454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.466868 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.480672 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.494876 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.506691 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.520820 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 
crc kubenswrapper[4985]: I0128 18:13:38.529129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529150 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.529204 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.532175 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.544807 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.548270 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 08:32:13.696351274 +0000 UTC Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.562117 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.578529 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632538 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.632648 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.735415 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.779894 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.780792 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:38 crc kubenswrapper[4985]: E0128 18:13:38.781004 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838777 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.838875 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941311 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:38 crc kubenswrapper[4985]: I0128 18:13:38.941436 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:38Z","lastTransitionTime":"2026-01-28T18:13:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044131 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044229 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.044242 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.147208 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.249986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.250154 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.352998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.353010 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456301 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456356 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.456399 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.549481 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 03:20:44.131033658 +0000 UTC
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.559153 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.662104 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765530 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.765584 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.868970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.869080 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.967854 4985 csr.go:261] certificate signing request csr-mk7bs is approved, waiting to be issued
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.971959 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:39 crc kubenswrapper[4985]: I0128 18:13:39.972422 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:39Z","lastTransitionTime":"2026-01-28T18:13:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.014229 4985 csr.go:257] certificate signing request csr-mk7bs is issued
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075616 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.075660 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.178555 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.263518 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.263533 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.263559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:13:40 crc kubenswrapper[4985]: E0128 18:13:40.264109 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:13:40 crc kubenswrapper[4985]: E0128 18:13:40.264144 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:13:40 crc kubenswrapper[4985]: E0128 18:13:40.264081 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.281143 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.383644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417128 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-g2g4k"]
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417562 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-9xm27"]
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417751 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2g4k"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.417802 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9xm27"
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.419648 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-6j9qp"]
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.420399 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421102 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421142 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421301 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.421371 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.422048 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.422540 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.422749 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.423016 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.423859 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.425810 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.441157 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-conf-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cnibin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 
18:13:40.443746 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cni-binary-copy\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443767 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-k8s-cni-cncf-io\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443792 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-netns\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443815 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-os-release\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443838 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-os-release\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443857 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-hosts-file\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443876 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443898 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-socket-dir-parent\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443918 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcbz\" (UniqueName: \"kubernetes.io/projected/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-kube-api-access-xhcbz\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443939 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443959 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz4mz\" (UniqueName: \"kubernetes.io/projected/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-kube-api-access-xz4mz\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.443985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-system-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444006 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-multus-certs\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444031 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-multus\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444052 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-kubelet\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444098 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-bin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444120 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cnibin\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444155 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-hostroot\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc 
kubenswrapper[4985]: I0128 18:13:40.444176 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-etc-kubernetes\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444199 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-system-cni-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444232 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444278 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-daemon-config\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.444361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2r9\" (UniqueName: \"kubernetes.io/projected/82fb0eec-adf5-4743-979d-6b7bf729e4f5-kube-api-access-qj2r9\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.459996 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.472956 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486286 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.486320 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.489180 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.502591 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.519004 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.535573 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544920 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-multus\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544955 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-kubelet\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544978 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-bin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.544993 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cnibin\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545010 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-hostroot\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545028 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-etc-kubernetes\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545045 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-system-cni-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " 
pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545073 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545095 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cnibin\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-kubelet\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545141 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-system-cni-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-hostroot\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj2r9\" (UniqueName: \"kubernetes.io/projected/82fb0eec-adf5-4743-979d-6b7bf729e4f5-kube-api-access-qj2r9\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545184 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-etc-kubernetes\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-bin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 
18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545189 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-daemon-config\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545199 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-var-lib-cni-multus\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545249 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-conf-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545291 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-conf-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cnibin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545352 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cni-binary-copy\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545374 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-k8s-cni-cncf-io\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545396 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-netns\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-os-release\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545451 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-hosts-file\") pod 
\"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545473 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-os-release\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545518 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-socket-dir-parent\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545542 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhcbz\" (UniqueName: \"kubernetes.io/projected/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-kube-api-access-xhcbz\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545587 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-system-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-multus-certs\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545633 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz4mz\" (UniqueName: \"kubernetes.io/projected/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-kube-api-access-xz4mz\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-daemon-config\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" 
Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545935 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cni-binary-copy\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.545950 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-cnibin\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546006 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546059 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/82fb0eec-adf5-4743-979d-6b7bf729e4f5-cni-binary-copy\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-socket-dir-parent\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546093 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-hosts-file\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-multus-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-k8s-cni-cncf-io\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-system-cni-dir\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-os-release\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546161 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-multus-certs\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546145 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-host-run-netns\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.546185 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-os-release\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.547594 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/82fb0eec-adf5-4743-979d-6b7bf729e4f5-tuning-conf-dir\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.549857 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 21:52:21.224977285 +0000 UTC Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.561048 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.570156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz4mz\" (UniqueName: \"kubernetes.io/projected/1301b014-a9ed-4b29-8dc2-86c01d6bd13a-kube-api-access-xz4mz\") pod \"node-resolver-9xm27\" (UID: \"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\") " pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.570164 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qj2r9\" (UniqueName: \"kubernetes.io/projected/82fb0eec-adf5-4743-979d-6b7bf729e4f5-kube-api-access-qj2r9\") pod \"multus-additional-cni-plugins-6j9qp\" (UID: \"82fb0eec-adf5-4743-979d-6b7bf729e4f5\") " pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.570275 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhcbz\" (UniqueName: \"kubernetes.io/projected/14fdd73a-b8dd-42da-88b4-2ccb314c4f7a-kube-api-access-xhcbz\") pod \"multus-g2g4k\" (UID: \"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\") " pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.578605 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.588732 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.595178 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.614559 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.657310 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.683309 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691630 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.691653 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.695228 4985 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696170 4985 reflector.go:484] object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.696316 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-operator/pods/network-operator-58b4c7f79c-55gtf/status\": read tcp 38.102.83.195:51400->38.102.83.195:6443: use of closed network connection" Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696864 4985 reflector.go:484] object-"openshift-multus"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696900 4985 reflector.go:484] object-"openshift-multus"/"default-dockercfg-2q5b6": watch of *v1.Secret ended with: very short watch: object-"openshift-multus"/"default-dockercfg-2q5b6": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.696952 4985 reflector.go:484] 
object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697155 4985 reflector.go:484] object-"openshift-multus"/"cni-copy-resources": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"cni-copy-resources": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697193 4985 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697207 4985 reflector.go:484] object-"openshift-multus"/"default-cni-sysctl-allowlist": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"default-cni-sysctl-allowlist": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697239 4985 reflector.go:484] object-"openshift-multus"/"multus-daemon-config": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"multus-daemon-config": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697282 4985 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: W0128 18:13:40.697615 4985 reflector.go:484] object-"openshift-multus"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-multus"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.725701 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.736190 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g2g4k" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.747292 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9xm27" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.755575 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.766518 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.793590 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.799422 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-rmr8h"] Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.799822 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.800673 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.802592 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"] Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803344 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803733 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803863 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.803950 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.804708 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.804750 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.806874 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807430 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807545 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807679 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807743 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807806 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.807850 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.819125 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.835050 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.846472 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848604 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848682 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-mcd-auth-proxy-config\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848708 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-kube-api-access-fsgxm\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 
18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848821 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848844 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848920 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848940 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: 
\"kubernetes.io/host-path/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-rootfs\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.848983 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849003 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-proxy-tls\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849043 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849080 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849303 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849437 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849486 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.849516 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.864945 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.884644 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.905492 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:40Z","lastTransitionTime":"2026-01-28T18:13:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.917480 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.934013 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.946861 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is 
after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951242 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-mcd-auth-proxy-config\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-kube-api-access-fsgxm\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951323 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951340 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951359 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951392 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951409 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951466 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-rootfs\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951535 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951549 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-proxy-tls\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951583 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951618 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951634 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951679 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951694 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951709 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.951831 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952335 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-rootfs\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952525 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-mcd-auth-proxy-config\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952605 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952634 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.952722 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.953079 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.953171 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.953216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954073 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954177 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954694 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.954804 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.955305 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.959174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.960827 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-proxy-tls\") pod \"machine-config-daemon-rmr8h\" (UID: 
\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.969745 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.972572 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ba791a5a-08bb-4a97-a4e4-9b0e06bac324-kube-api-access-fsgxm\") pod \"machine-config-daemon-rmr8h\" (UID: \"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\") " pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.974649 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"ovnkube-node-zd8w7\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:40 crc kubenswrapper[4985]: I0128 18:13:40.988726 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008000 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.008134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.016380 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 18:08:39 +0000 UTC, rotation deadline is 2026-12-17 14:03:38.867978967 +0000 UTC Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.016467 4985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7747h49m57.8515148s for next certificate rotation Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.023873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.039954 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.054371 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.071703 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.090047 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.106131 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: 
I0128 18:13:41.111492 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.111507 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.185064 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.195345 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:41 crc kubenswrapper[4985]: W0128 18:13:41.197566 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084 WatchSource:0}: Error finding container 9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084: Status 404 returned error can't find the container with id 9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084 Jan 28 18:13:41 crc kubenswrapper[4985]: W0128 18:13:41.208078 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd7b8cde_d2fe_4842_857e_545172f5bd12.slice/crio-9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3 WatchSource:0}: Error finding container 9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3: Status 404 returned error can't find the container with id 9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3 Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215532 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.215544 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.277784 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.292395 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.311175 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"
},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":
\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318508 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318547 4985 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.318590 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.325694 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.336758 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.352407 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.366625 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.383864 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"
/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.397663 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.413981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422555 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.422620 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.429936 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126b
d791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.448994 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.467287 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.478008 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.478312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"83cfa349ea19eeb2ba4ee6c3e38baa19feef8e50da4261b453c9b301fec5d3a4"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.479429 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.479463 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"8fa85938472cd987d53b9e4dfedafa96704cdaea57e22ced6e351648516dd147"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.480725 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9xm27" event={"ID":"1301b014-a9ed-4b29-8dc2-86c01d6bd13a","Type":"ContainerStarted","Data":"b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.480759 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9xm27" event={"ID":"1301b014-a9ed-4b29-8dc2-86c01d6bd13a","Type":"ContainerStarted","Data":"283ea2a50827490d010f9f715abf8898212189783504eb80387cce3f532818c9"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.482748 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" exitCode=0 Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.482836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.482920 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.484536 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.484656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"9b52ae7410e044c3d62e8ba4c47f080402d59995b57fa01be5e4289793202084"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.496154 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.507240 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.520439 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.521989 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525042 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525075 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525106 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.525118 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.537015 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.550144 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 02:03:42.983412122 +0000 UTC Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.557110 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.572277 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.588955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.591059 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.601341 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.608091 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.627633 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.636271 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.657437 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.695974 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.709795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730941 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.730991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.731006 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.731656 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.746109 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.758041 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.778557 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.792453 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.804406 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.812989 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.817130 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.828877 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.833759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.833904 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.833982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.834046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.834105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.845955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.857718 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.861071 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.861214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861279 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.861235462 +0000 UTC m=+40.687798323 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861361 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.861422 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861429 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.861409517 +0000 UTC m=+40.687972428 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861525 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.861577 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.861566271 +0000 UTC m=+40.688129172 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.871127 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.888828 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.920558 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936768 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.936833 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:41Z","lastTransitionTime":"2026-01-28T18:13:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
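Note on the recurring "failed calling webhook" entries: every status patch dies on the same root cause. The network-node-identity webhook at 127.0.0.1:9743 is serving a certificate that expired 2025-08-24, months before this node's clock reads 2026-01-28. A quick way to confirm which cert the endpoint actually presents, as a sketch (assumes the third-party cryptography package; host and port are from the log):

    import socket
    import ssl
    from cryptography import x509  # assumption: pip install cryptography

    HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log lines above

    def serving_cert_not_after(host: str, port: int):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False
        ctx.verify_mode = ssl.CERT_NONE  # inspect the cert, don't verify it
        with socket.create_connection((host, port), timeout=5) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                der = tls.getpeercert(binary_form=True)
        return x509.load_der_x509_certificate(der).not_valid_after

    # Expected to print 2025-08-24 17:21:41 for the cert in this log.
    print(serving_cert_not_after(HOST, PORT))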
Has your network provider started?"} Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.950715 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.961969 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.962021 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962146 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962166 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962177 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962229 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.962215879 +0000 UTC m=+40.788778700 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962308 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962364 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962391 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: E0128 18:13:41.962490 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:49.962463116 +0000 UTC m=+40.789025977 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:41 crc kubenswrapper[4985]: I0128 18:13:41.980763 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.012612 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.020565 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.039544 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.067192 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143645 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.143717 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.246552 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.263949 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.263970 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.263970 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:42 crc kubenswrapper[4985]: E0128 18:13:42.264092 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:42 crc kubenswrapper[4985]: E0128 18:13:42.264246 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:42 crc kubenswrapper[4985]: E0128 18:13:42.264424 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.269969 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.283945 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.349782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.350454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453246 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.453277 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491341 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491422 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.491433 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.493235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.494678 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73" exitCode=0 Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.494755 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.507322 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.519401 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.541172 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.550547 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 03:48:11.075731855 +0000 UTC Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.555888 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.557186 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.577439 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.597621 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.611566 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.633418 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661287 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.661642 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.673310 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.683964 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.696727 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.709214 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.722497 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.731717 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.743275 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765109 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.765138 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.768954 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.809200 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc 
kubenswrapper[4985]: I0128 18:13:42.851352 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate 
has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868768 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.868850 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.894347 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.930775 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971070 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.971105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:42Z","lastTransitionTime":"2026-01-28T18:13:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:42 crc kubenswrapper[4985]: I0128 18:13:42.972489 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\
\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:42Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.012841 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.053857 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073866 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.073883 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.095423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.142470 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177741 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.177754 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.280991 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.383960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.383994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.384005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.384020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.384029 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.443969 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-dlz95"] Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.444466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.448933 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.450211 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.451050 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.451197 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.465339 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.474659 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc08b2fa-f391-4427-b450-d72953c4056b-host\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.474729 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrg9g\" (UniqueName: \"kubernetes.io/projected/fc08b2fa-f391-4427-b450-d72953c4056b-kube-api-access-lrg9g\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.474765 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fc08b2fa-f391-4427-b450-d72953c4056b-serviceca\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.483986 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.486436 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.500655 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.503652 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.503708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.507510 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.521392 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.532095 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.545139 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.550920 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 15:25:56.393115825 +0000 UTC Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.558949 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-cr
c-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.574589 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575191 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrg9g\" (UniqueName: \"kubernetes.io/projected/fc08b2fa-f391-4427-b450-d72953c4056b-kube-api-access-lrg9g\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575337 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fc08b2fa-f391-4427-b450-d72953c4056b-serviceca\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575422 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc08b2fa-f391-4427-b450-d72953c4056b-host\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.575497 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/fc08b2fa-f391-4427-b450-d72953c4056b-host\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " 
pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.576704 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/fc08b2fa-f391-4427-b450-d72953c4056b-serviceca\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.589454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.606820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrg9g\" (UniqueName: \"kubernetes.io/projected/fc08b2fa-f391-4427-b450-d72953c4056b-kube-api-access-lrg9g\") pod \"node-ca-dlz95\" (UID: \"fc08b2fa-f391-4427-b450-d72953c4056b\") " pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.621455 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.641238 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.684539 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.693527 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.707196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.750052 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.759699 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-dlz95" Jan 28 18:13:43 crc kubenswrapper[4985]: W0128 18:13:43.779763 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc08b2fa_f391_4427_b450_d72953c4056b.slice/crio-6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701 WatchSource:0}: Error finding container 6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701: Status 404 returned error can't find the container with id 6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701 Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.791174 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.795906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.795976 4985 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.795985 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.796004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.796017 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.832202 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.869092 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.902208 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:43Z","lastTransitionTime":"2026-01-28T18:13:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.917139 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.949339 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:43 crc kubenswrapper[4985]: I0128 18:13:43.992278 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:43Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.005127 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.029533 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.072015 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110216 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.110277 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.112738 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115243 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.115276 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.133882 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.137236 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.152842 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.153509 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.157922 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.170139 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173899 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.173931 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.190060 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.191320 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.194504 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.206189 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.206352 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212847 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.212859 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.229634 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}
},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.263998 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.264046 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.264074 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.264172 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.264301 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:44 crc kubenswrapper[4985]: E0128 18:13:44.264415 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.270058 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.309062 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.315970 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.350457 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.418996 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.513204 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dlz95" event={"ID":"fc08b2fa-f391-4427-b450-d72953c4056b","Type":"ContainerStarted","Data":"7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.513625 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-dlz95" event={"ID":"fc08b2fa-f391-4427-b450-d72953c4056b","Type":"ContainerStarted","Data":"6381b1fc62a6cf1f7a638a66ea8c21cb79b21eb32a67f421d5f93aeefe963701"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525314 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525338 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.525348 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.527355 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540" exitCode=0 Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.527412 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.531803 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a57
8bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.545946 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.551752 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 21:24:45.672538887 +0000 UTC Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.558658 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.575590 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.597774 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.610018 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.627935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.627981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.627991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.628010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.628022 4985 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.630147 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.668283 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.712026 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.730778 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.751442 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.804170 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\
\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.832981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.834972 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.872395 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.912674 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.937898 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:44Z","lastTransitionTime":"2026-01-28T18:13:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.954742 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:44 crc kubenswrapper[4985]: I0128 18:13:44.994957 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:44Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.028618 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.040535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041057 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.041068 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.069588 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.112482 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.144163 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.149740 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.192101 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.233124 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.247823 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.271196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.327443 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.351712 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.353137 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\
",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.392632 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.432474 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.455647 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.473640 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.543795 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0" exitCode=0 Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.543851 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.552414 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 13:30:17.154967493 +0000 UTC Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 
crc kubenswrapper[4985]: I0128 18:13:45.559563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.559594 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.561795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.581268 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.600582 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.631791 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\
\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.663332 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.673684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.713758 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.749088 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766292 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766315 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.766329 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.788686 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.830299 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.870100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.869867 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.870235 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.911070 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.955412 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.972954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973018 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.973049 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:45Z","lastTransitionTime":"2026-01-28T18:13:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:45 crc kubenswrapper[4985]: I0128 18:13:45.988549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.034993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076189 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.076211 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179287 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179301 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.179334 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.263435 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.263447 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:46 crc kubenswrapper[4985]: E0128 18:13:46.263620 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.263472 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:46 crc kubenswrapper[4985]: E0128 18:13:46.263881 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:46 crc kubenswrapper[4985]: E0128 18:13:46.263902 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.283421 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.386975 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490150 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.490229 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.551431 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee" exitCode=0 Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.552528 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 21:28:03.139247913 +0000 UTC Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.552451 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.558098 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.574185 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592684 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.592708 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.596705 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.614775 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.630042 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.648179 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.676282 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.694512 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.696514 4985 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.716726 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.730845 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.758206 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":tr
ue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.778874 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799645 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.799697 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.802479 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:
13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.819506 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.833317 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:46Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:46 
crc kubenswrapper[4985]: I0128 18:13:46.906218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:46 crc kubenswrapper[4985]: I0128 18:13:46.906275 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:46Z","lastTransitionTime":"2026-01-28T18:13:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.009908 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.010443 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.112879 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215838 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.215870 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.320206 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.423441 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.526635 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.553198 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 08:34:21.236578976 +0000 UTC Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.565445 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.579958 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.596568 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.610226 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.624524 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629090 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.629136 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.639860 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.655983 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.675654 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.687739 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.701330 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.711443 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.729079 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.731883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.731996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.732021 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.732050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.732072 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.743887 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.765245 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly
\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.776960 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api
-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:47Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834507 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.834530 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:47 crc kubenswrapper[4985]: I0128 18:13:47.938515 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:47Z","lastTransitionTime":"2026-01-28T18:13:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042168 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.042217 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.145181 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250040 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.250151 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.263079 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:48 crc kubenswrapper[4985]: E0128 18:13:48.263208 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.263686 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:48 crc kubenswrapper[4985]: E0128 18:13:48.263751 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.263937 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:48 crc kubenswrapper[4985]: E0128 18:13:48.264157 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.358519 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462869 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.462974 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.554418 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:12:12.041127568 +0000 UTC Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.566722 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.584012 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb" exitCode=0 Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.584191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.592628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.593579 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.593645 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.593675 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.605672 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.624133 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.628830 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.630311 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.643932 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.665040 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673886 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.673977 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.682221 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.697382 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.721192 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z 
is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.735419 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.748642 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.759580 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.775174 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777944 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777960 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.777972 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.790232 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.807456 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.818859 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.831323 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.844953 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
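
The "back-off 10s" message just above is the kubelet's crash-loop backoff engaging for kube-apiserver-check-endpoints after its exit-255 termination: as documented kubelet behavior, the restart delay starts at 10s, doubles per failed restart, and is capped at five minutes. A small sketch of that schedule (an illustration of the policy, not kubelet code):

```python
# Crash-loop restart delays: 10s base, doubling, capped at 300s.
def crashloop_delays(restarts: int, base: int = 10, cap: int = 300):
    delay = base
    for _ in range(restarts):
        yield delay
        delay = min(delay * 2, cap)

print(list(crashloop_delays(7)))  # [10, 20, 40, 80, 160, 300, 300]
```
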
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.858010 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.870045 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.880979 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.887564 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2
r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\
":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.898384 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.908992 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.921241 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"k
ube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.936959 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.951995 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.966657 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.982222 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:48Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
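
Because the webhook rejects every status patch, the same failure repeats once per pod per status sync, which is why nearly every pod on the node appears in this section. One way to gauge the blast radius is to tally the "Failed to update status for pod" entries by pod name; the short script below reads journal text from stdin, and its regex mirrors the log format above (usage such as piping journalctl output into it is an assumption):

```python
# Tally webhook-induced status-update failures per pod from journal text
# on stdin, e.g.: journalctl --no-pager | python3 tally_failures.py
import re
import sys
from collections import Counter

pods = Counter(re.findall(r'"Failed to update status for pod" pod="([^"]+)"',
                          sys.stdin.read()))
for pod, n in pods.most_common():
    print(f"{n:3d}  {pod}")
```
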
event="NodeHasNoDiskPressure" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:48 crc kubenswrapper[4985]: I0128 18:13:48.984644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:48Z","lastTransitionTime":"2026-01-28T18:13:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.008614 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691
fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.018526 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.100687 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.203777 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.306984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.307026 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.413890 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.517742 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.554654 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 15:47:45.771917445 +0000 UTC Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621455 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.621467 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.632913 4985 generic.go:334] "Generic (PLEG): container finished" podID="82fb0eec-adf5-4743-979d-6b7bf729e4f5" containerID="1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2" exitCode=0 Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.634653 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerDied","Data":"1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.658355 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.677549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.695628 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.713423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723928 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.723976 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.731785 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07
e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\
\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.746882 4985 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 
28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.762108 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs
\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.781695 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.797490 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.811179 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.823684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.831152 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.837663 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.859107 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPat
h\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.870997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:49Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.934178 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:49Z","lastTransitionTime":"2026-01-28T18:13:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.958762 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.958973 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:14:05.95893794 +0000 UTC m=+56.785500771 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.959061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:49 crc kubenswrapper[4985]: I0128 18:13:49.959102 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959247 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959283 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959350 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:05.959331111 +0000 UTC m=+56.785893952 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:13:49 crc kubenswrapper[4985]: E0128 18:13:49.959417 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:05.959387443 +0000 UTC m=+56.785950294 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.036952 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.059940 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.060005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060144 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060165 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060176 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060226 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:06.060212405 +0000 UTC m=+56.886775226 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060610 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060621 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060629 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.060650 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:06.060643547 +0000 UTC m=+56.887206368 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.140211 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.243831 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.263146 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.263226 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.263300 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.263406 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.263603 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:50 crc kubenswrapper[4985]: E0128 18:13:50.263786 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.351621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352551 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.352723 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456425 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.456463 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.555403 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 10:19:25.077420959 +0000 UTC Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.560429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.642019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" event={"ID":"82fb0eec-adf5-4743-979d-6b7bf729e4f5","Type":"ContainerStarted","Data":"9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.658072 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662729 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662747 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.662777 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.671515 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.688790 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.703949 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.721275 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.738772 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.754795 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.765997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.766009 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.773470 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.791097 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.806909 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.838149 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691
fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.854959 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868521 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.868619 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.874978 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f
7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.890350 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:50Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.970994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971111 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:50 crc kubenswrapper[4985]: I0128 18:13:50.971132 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:50Z","lastTransitionTime":"2026-01-28T18:13:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.074328 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177602 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.177644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.280813 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.286143 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e
911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.301925 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2a
f0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.324981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.344146 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.363632 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.383599 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.384487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.409543 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.425903 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-oper
ator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.443001 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.464304 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.477060 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486912 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.486944 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.494861 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691
fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.509978 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.522627 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.556015 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 21:41:56.310940475 +0000 UTC Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592683 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.592707 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.694997 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.797630 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901197 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901209 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:51 crc kubenswrapper[4985]: I0128 18:13:51.901239 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:51Z","lastTransitionTime":"2026-01-28T18:13:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004190 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.004214 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.107622 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210110 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210125 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.210135 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.263618 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.263673 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.263760 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:52 crc kubenswrapper[4985]: E0128 18:13:52.263834 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:52 crc kubenswrapper[4985]: E0128 18:13:52.263926 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:52 crc kubenswrapper[4985]: E0128 18:13:52.264014 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.314160 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417529 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.417569 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.520804 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.556227 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 03:40:01.481925316 +0000 UTC Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.623453 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727177 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.727197 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.830591 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:52 crc kubenswrapper[4985]: I0128 18:13:52.934303 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:52Z","lastTransitionTime":"2026-01-28T18:13:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.038414 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.142294 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245872 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.245938 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.264418 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.297128 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5"] Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.297930 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.300641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.305443 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.319243 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-route
r-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\
\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.339674 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-2
8T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349305 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.349352 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.360948 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.379931 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfjql\" (UniqueName: \"kubernetes.io/projected/300be08e-8565-45ad-a77e-ac1b90ff61e7-kube-api-access-dfjql\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397824 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: 
\"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.397851 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.403735 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08
287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.420310 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.432873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.447982 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.451725 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.467786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.479869 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.496032 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.498880 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.498956 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.499004 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfjql\" (UniqueName: \"kubernetes.io/projected/300be08e-8565-45ad-a77e-ac1b90ff61e7-kube-api-access-dfjql\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.499082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.499916 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-env-overrides\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.500294 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.508695 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/300be08e-8565-45ad-a77e-ac1b90ff61e7-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: \"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.531468 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.535906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfjql\" (UniqueName: \"kubernetes.io/projected/300be08e-8565-45ad-a77e-ac1b90ff61e7-kube-api-access-dfjql\") pod \"ovnkube-control-plane-749d76644c-xvwg5\" (UID: 
\"300be08e-8565-45ad-a77e-ac1b90ff61e7\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.550810 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.555414 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.556421 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 04:37:32.82175083 +0000 UTC Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.571669 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.588024 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.620167 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657169 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.657235 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.658294 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" event={"ID":"300be08e-8565-45ad-a77e-ac1b90ff61e7","Type":"ContainerStarted","Data":"6e1cfe4fa0b27db4e6877b96a42c166a369da79cb02f1b71332dffbf069e637f"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.660198 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/0.log" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.662857 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af" exitCode=1 Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.662882 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.663485 4985 scope.go:117] "RemoveContainer" containerID="7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.684429 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.703681 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.720204 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.734854 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.753156 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.759994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.760111 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.785600 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/k
ubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20
99482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.802030 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.820563 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.835369 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.849349 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.865892 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syn
cer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868232 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868272 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.868286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.884375 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.902489 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.915918 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.932935 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:53Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:53 
crc kubenswrapper[4985]: I0128 18:13:53.971119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:53 crc kubenswrapper[4985]: I0128 18:13:53.971150 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:53Z","lastTransitionTime":"2026-01-28T18:13:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074749 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.074907 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178713 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178839 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.178861 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.263611 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.263650 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.263620 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.263814 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.263963 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.264168 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.282754 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386517 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386550 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.386572 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.410523 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.437281 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.443245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.443454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.444171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.444415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.444556 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.472959 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478657 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.478804 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.502337 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.507322 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.526906 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.532751 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.547842 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.548011 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.549988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550018 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.550059 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.556763 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 14:46:34.447603228 +0000 UTC Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653523 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.653536 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.676870 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/0.log"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.680640 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.683284 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.685589 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.756853 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.844776 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-hrd6k"]
Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.845781 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:13:54 crc kubenswrapper[4985]: E0128 18:13:54.845884 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865350 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865713 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.865778 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.885661 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.898633 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.922879 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.924660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6nz\" (UniqueName: \"kubernetes.io/projected/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-kube-api-access-ql6nz\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " 
pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.924846 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.943413 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.962993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.968753 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:54Z","lastTransitionTime":"2026-01-28T18:13:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.981069 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:54 crc kubenswrapper[4985]: I0128 18:13:54.998620 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:54Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.016758 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.026395 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.026470 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6nz\" (UniqueName: \"kubernetes.io/projected/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-kube-api-access-ql6nz\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.026681 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.026808 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:55.526779949 +0000 UTC m=+46.353342860 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.030184 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.042126 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.044479 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6nz\" (UniqueName: \"kubernetes.io/projected/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-kube-api-access-ql6nz\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.053043 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" 
for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.069995 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"
/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 
6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc
4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.071574 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.080148 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.094340 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.107996 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.173886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275969 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.275995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.276011 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.378432 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.480992 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481439 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.481490 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.531812 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.532000 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: E0128 18:13:55.532069 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:56.532052872 +0000 UTC m=+47.358615693 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.557487 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:03:46.167743614 +0000 UTC Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.584947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.584982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.584993 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.585007 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.585016 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.688453 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.691292 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" event={"ID":"300be08e-8565-45ad-a77e-ac1b90ff61e7","Type":"ContainerStarted","Data":"c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.691332 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" event={"ID":"300be08e-8565-45ad-a77e-ac1b90ff61e7","Type":"ContainerStarted","Data":"5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.691630 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.712126 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identit
y-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.728113 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.744282 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.754876 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.765721 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.781751 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.791566 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.796888 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.812423 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.828131 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.842091 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.855911 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.895213 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896800 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.896927 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.925582 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.980955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.991439 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:55Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:55 crc kubenswrapper[4985]: I0128 18:13:55.999730 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:55Z","lastTransitionTime":"2026-01-28T18:13:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.003890 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.017459 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.032935 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 
2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.047461 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.063232 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.082325 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.094141 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.103387 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.108504 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.120965 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.135428 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.147362 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.162356 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.177869 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.200325 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image
\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node 
ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"con
tainerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206239 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.206318 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.218373 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.233311 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.251566 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 
18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264034 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264155 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.264067 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264335 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264451 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264580 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.264682 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308953 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.308964 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412070 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412108 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412118 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.412145 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.515245 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.542310 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.542535 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.542622 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:13:58.542597978 +0000 UTC m=+49.369160839 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.558527 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 14:00:01.722522511 +0000 UTC Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618616 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.618707 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.697446 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.698121 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/0.log" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.701140 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202" exitCode=1 Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.701184 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.701290 4985 scope.go:117] "RemoveContainer" containerID="7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.702468 4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202" Jan 28 18:13:56 crc kubenswrapper[4985]: E0128 18:13:56.702756 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" 
podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.721135 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.732348 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.748794 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.770408 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.787596 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.814321 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: 
I0128 18:13:56.824413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.824457 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.836852 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.852076 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.867140 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.883411 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.900040 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.915477 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.927129 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:56Z","lastTransitionTime":"2026-01-28T18:13:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.939983 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b
3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7282c732cd6d241491eca0a5b764a86fdc171691fd866cebcc71ffab483fb5af\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"message\\\":\\\"go:160\\\\nI0128 18:13:52.242611 6253 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:13:52.243400 6253 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 18:13:52.244091 6253 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:13:52.244143 6253 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:13:52.244165 6253 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 18:13:52.244176 6253 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:13:52.244181 6253 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 18:13:52.244193 6253 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 18:13:52.244200 6253 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:13:52.244211 6253 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 18:13:52.244192 6253 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 18:13:52.244228 6253 factory.go:656] Stopping watch factory\\\\nI0128 18:13:52.244239 6253 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:47Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 
6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee
050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.953054 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\
\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.966810 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.975889 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:56 crc kubenswrapper[4985]: I0128 18:13:56.988712 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:56Z is after 2025-08-24T17:21:41Z" Jan 28 
18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.029661 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.132760 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236807 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.236829 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.340226 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.443883 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.443959 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.443987 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.444021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.444045 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547611 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.547654 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.558804 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 06:51:37.856259495 +0000 UTC Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650683 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.650740 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.707909 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.713666 4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202" Jan 28 18:13:57 crc kubenswrapper[4985]: E0128 18:13:57.714141 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.735344 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.754826 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.755557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.755732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.756031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.756186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.756401 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.791624 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b
3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.807766 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.839032 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.856059 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.860978 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.872715 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.897526 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.915747 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.936116 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.949382 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.964232 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965522 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.965696 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:57Z","lastTransitionTime":"2026-01-28T18:13:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.978722 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:57 crc kubenswrapper[4985]: I0128 18:13:57.996046 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:57Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.009906 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:58 crc 
kubenswrapper[4985]: I0128 18:13:58.026639 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\
":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:13:58Z is after 2025-08-24T17:21:41Z" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.069970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.070716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.070812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.071057 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.071163 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174447 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.174466 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268357 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268391 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268543 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.268619 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268789 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.268995 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278630 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.278764 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.381925 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485483 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485508 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.485568 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.559594 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:53:43.968095787 +0000 UTC Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.572738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.572968 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:58 crc kubenswrapper[4985]: E0128 18:13:58.573091 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:02.57306474 +0000 UTC m=+53.399627591 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.588479 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.691932 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794838 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.794897 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:58 crc kubenswrapper[4985]: I0128 18:13:58.898629 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:58Z","lastTransitionTime":"2026-01-28T18:13:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000859 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.000892 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.103731 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206645 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.206734 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309193 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.309331 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412689 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.412726 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515862 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515940 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.515953 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.560379 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 23:32:11.053176374 +0000 UTC Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620300 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.620352 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.725902 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.833949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.834076 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.937990 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:13:59 crc kubenswrapper[4985]: I0128 18:13:59.938105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:13:59Z","lastTransitionTime":"2026-01-28T18:13:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.042632 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.145853 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.249687 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.263984 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.264010 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264173 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.264325 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.264360 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264480 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264727 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:00 crc kubenswrapper[4985]: E0128 18:14:00.264824 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.353784 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.456906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457537 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.457872 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560533 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:28:59.633906578 +0000 UTC Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560847 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.560884 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663956 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.663999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.664033 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.768237 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871290 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871311 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.871358 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:00 crc kubenswrapper[4985]: I0128 18:14:00.974426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:00Z","lastTransitionTime":"2026-01-28T18:14:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.077729 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.181232 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.280559 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284288 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.284314 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration 
file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.299567 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Runni
ng\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.313807 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.335902 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\
":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.355515 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.376053 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.387129 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.393106 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.410997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\
"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.427926 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.444564 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.456017 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.467894 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.479213 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489224 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489242 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489301 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.489321 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.491807 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.509549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.524893 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.560892 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 23:04:44.288635943 +0000 UTC Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.592444 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.695569 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.798312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.798387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.798405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.798431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.798452 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.902189 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.902704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.902721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.902747 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:01 crc kubenswrapper[4985]: I0128 18:14:01.902768 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:01Z","lastTransitionTime":"2026-01-28T18:14:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.006604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.006661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.006677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.006704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.006721 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.110104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.110231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.110279 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.110304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.110323 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.213863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.213925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.213943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.213967 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.213984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.263858 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.263923 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.263923 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264073 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264226 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264474 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.264539 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.264766 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.317424 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.317477 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.317494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.317518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.317535 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.412634 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.420003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.420061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.420078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.420104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.420122 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.523931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.524058 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.524095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.524141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.524187 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.561359 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 00:58:02.569340537 +0000 UTC Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.621342 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.621623 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:02 crc kubenswrapper[4985]: E0128 18:14:02.622044 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:10.622010491 +0000 UTC m=+61.448573352 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.628060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.628129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.628143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.628159 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.628170 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.731798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.731852 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.731867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.731894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.731906 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.834787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.834857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.834880 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.834909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.834930 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.938891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.938958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.938974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.938998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:02 crc kubenswrapper[4985]: I0128 18:14:02.939018 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:02Z","lastTransitionTime":"2026-01-28T18:14:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.041868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.041949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.041971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.042003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.042026 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.145675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.145746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.145768 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.145800 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.145823 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.248871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.249305 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.249486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.249677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.249910 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.353430 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456731 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.456847 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.503943 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.512317 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.524998 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.542512 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.559639 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.561917 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 19:13:20.418214233 +0000 UTC Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.579683 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b
3b3a39adbe6f8f521c089202\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.593057 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.611517 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.626684 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.645993 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662699 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.662817 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.663942 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.678553 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.699725 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.714347 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.730385 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.753184 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.766122 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.769473 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.827762 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.845195 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:03Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:03 crc 
kubenswrapper[4985]: I0128 18:14:03.869727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.869866 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:03 crc kubenswrapper[4985]: I0128 18:14:03.972710 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:03Z","lastTransitionTime":"2026-01-28T18:14:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.076279 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.179426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.263387 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.263459 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.263400 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.264053 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264363 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264435 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264502 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.264566 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.282673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.283578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386298 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.386342 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.489945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490024 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.490067 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.562389 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 10:12:40.928330821 +0000 UTC Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.592910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.592981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.592999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.593028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.593045 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696501 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.696525 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.767782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768107 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.768549 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.784472 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789737 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.789781 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.807769 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.812931 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.829990 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836210 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.836373 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.853844 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859521 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.859647 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.872399 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:04Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:04 crc kubenswrapper[4985]: E0128 18:14:04.872537 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.874425 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977517 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977614 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:04 crc kubenswrapper[4985]: I0128 18:14:04.977629 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:04Z","lastTransitionTime":"2026-01-28T18:14:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.081793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.081854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.081867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.087706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.087905 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192313 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.192378 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.295137 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398690 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398785 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.398831 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.502159 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.562859 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 08:12:43.471683309 +0000 UTC Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605067 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605090 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.605152 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708521 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.708545 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.811243 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.914491 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:05Z","lastTransitionTime":"2026-01-28T18:14:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.961175 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.961439 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961516 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:14:37.961481294 +0000 UTC m=+88.788044145 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961623 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:14:05 crc kubenswrapper[4985]: I0128 18:14:05.961707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961728 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:37.96170208 +0000 UTC m=+88.788264931 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961833 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:14:05 crc kubenswrapper[4985]: E0128 18:14:05.961890 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:37.961877064 +0000 UTC m=+88.788439915 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.017487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.062588 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.062677 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062879 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062881 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062909 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062927 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062936 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.062945 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.063020 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:38.062995675 +0000 UTC m=+88.889558526 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.063065 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:38.063039676 +0000 UTC m=+88.889602537 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.120886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224394 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.224436 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263320 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263374 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263406 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.263330 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263560 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263704 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263847 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:06 crc kubenswrapper[4985]: E0128 18:14:06.263968 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.327974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.328154 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.431902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.431964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.431983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.432014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.432036 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.535926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.535995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.536025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.536057 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.536081 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.564082 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 18:04:28.297588372 +0000 UTC Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639679 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.639838 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.743609 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.846982 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:06 crc kubenswrapper[4985]: I0128 18:14:06.950735 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:06Z","lastTransitionTime":"2026-01-28T18:14:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.054482 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157651 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.157669 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.261628 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364652 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.364716 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468221 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.468383 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.565032 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:57:55.99114387 +0000 UTC
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.572237 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691510 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.691529 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.795483 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:07 crc kubenswrapper[4985]: I0128 18:14:07.899361 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:07Z","lastTransitionTime":"2026-01-28T18:14:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002529 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.002556 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.105931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.106086 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209685 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.209916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.210163 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263299 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263423 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.263467 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.263674 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.263823 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.263961 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:08 crc kubenswrapper[4985]: E0128 18:14:08.264120 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.313779 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.416711 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520300 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.520429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.565774 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:57:57.972722402 +0000 UTC
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.623847 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728150 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.728200 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831384 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.831406 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.937888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.937994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.938023 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.938065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:08 crc kubenswrapper[4985]: I0128 18:14:08.938104 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:08Z","lastTransitionTime":"2026-01-28T18:14:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.042502 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.145620 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248886 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.248922 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352229 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352364 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.352382 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.455856 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559801 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.559949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.560082 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.566371 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 13:12:02.926958981 +0000 UTC
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.664678 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.767989 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871741 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.871787 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974680 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:09 crc kubenswrapper[4985]: I0128 18:14:09.974764 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:09Z","lastTransitionTime":"2026-01-28T18:14:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077795 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.077809 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181314 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181425 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.181444 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263624 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263648 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263813 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.263856 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.263651 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.264030 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.264160 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.264388 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.265903 4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.284892 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388788 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.388866 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492899 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492948 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.492966 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.566933 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 11:20:59.975101739 +0000 UTC
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.596930 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.627054 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.627286 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:14:10 crc kubenswrapper[4985]: E0128 18:14:10.627378 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:14:26.627353431 +0000 UTC m=+77.453916292 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700528 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.700664 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.770885 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.775413 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.775988 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.801120 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803450 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.803481 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.823649 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.842576 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z"
Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.863815 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\
\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.879308 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.893490 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906512 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.906653 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:10Z","lastTransitionTime":"2026-01-28T18:14:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.911186 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.925183 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.940768 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.963766 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc
/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.981591 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:10 crc kubenswrapper[4985]: I0128 18:14:10.998031 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:10Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.009896 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.009999 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010054 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.010095 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.019836 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63
a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.033752 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.045997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.060203 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.113631 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.219947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.220065 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.284916 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.297907 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.313778 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.322973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.323012 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc 
kubenswrapper[4985]: I0128 18:14:11.323025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.323045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.323060 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.329284 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.344578 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.361902 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mo
untPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.377002 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.393655 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.406700 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.420657 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425862 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425874 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.425906 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.437151 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.452486 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.471578 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.481644 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.496535 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.509084 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.520057 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528694 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.528706 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.567178 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:50:49.07882908 +0000 UTC Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632611 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632755 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.632773 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736347 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.736478 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.781091 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.782344 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/1.log" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.786803 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" exitCode=1 Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.786880 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.786949 4985 scope.go:117] "RemoveContainer" containerID="f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.788021 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" Jan 28 18:14:11 crc kubenswrapper[4985]: E0128 18:14:11.788570 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.806325 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.817865 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.830191 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.839827 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.839939 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.840317 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.852410 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.869112 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.888053 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.905116 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.921302 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.935607 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.943520 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:11Z","lastTransitionTime":"2026-01-28T18:14:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.949912 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.961361 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.976980 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026
-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:11 crc kubenswrapper[4985]: I0128 18:14:11.993011 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:11Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.007773 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.022099 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047369 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.047467 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.052506 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping 
metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb
4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151173 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.151185 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.253979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.254162 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.262965 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.263012 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.263035 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.263092 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263174 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263285 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263586 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.263846 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.357772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.357938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.357963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.358014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.358038 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.421178 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.441220 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.460910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.460972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.460997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.461028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.461054 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.475039 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f2d3cefa0981c2625f6c807fb2e5d7da7d0ac31b3b3a39adbe6f8f521c089202\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"message\\\":\\\"62 6460 services_controller.go:444] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0128 18:13:55.722455 6460 obj_retry.go:409] Going to retry *v1.Pod resource setup for 14 objects: [openshift-dns/node-resolver-9xm27 openshift-machine-config-operator/machine-config-daemon-rmr8h openshift-multus/network-metrics-daemon-hrd6k openshift-network-node-identity/network-node-identity-vrzqb openshift-kube-apiserver/kube-apiserver-crc openshift-network-diagnostics/network-check-target-xd92c openshift-network-operator/iptables-alerter-4ln5h openshift-image-registry/node-ca-dlz95 openshift-multus/multus-g2g4k openshift-network-console/networking-console-plugin-85b44fc459-gdk6g openshift-multus/multus-additional-cni-plugins-6j9qp openshift-network-operator/network-operator-58b4c7f79c-55gtf openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5 openshift-ovn-kubernetes/ovnkube-node-zd8w7]\\\\nI0128 18:13:55.722471 6460 services_controller.go:445] Built service openshift-operator-lifecycle-manager/catalog-operator-metrics LB template configs for network=default: []services.lbConfig(nil)\\\\nF0128 18:13:55.722481 6460 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d
2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.493683 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.
126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.515748 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.534537 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.555126 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.564566 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.567748 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 15:18:51.374950063 +0000 UTC Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.578478 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.599736 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.623799 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.645410 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668020 4985 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668036 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.668396 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.690779 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\
\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.706690 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.726445 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 
2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.741793 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.756565 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.769936 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc 
kubenswrapper[4985]: I0128 18:14:12.771781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.771870 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.802136 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.812884 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" Jan 28 18:14:12 crc kubenswrapper[4985]: E0128 18:14:12.813204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.836197 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.849502 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.861626 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874450 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.874532 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.876147 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.890595 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.900957 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc 
kubenswrapper[4985]: I0128 18:14:12.919282 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.942536 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977157 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977229 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.977241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:12Z","lastTransitionTime":"2026-01-28T18:14:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.984530 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:12 crc kubenswrapper[4985]: I0128 18:14:12.996844 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:12Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.016186 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.025562 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.039407 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.058577 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.074781 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079573 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.079660 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.091911 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"
mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.108374 4985 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:13Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.182894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.182968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.182995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.183055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.183079 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286835 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.286918 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.390997 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.391013 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494790 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.494871 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.568437 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 02:54:29.185243342 +0000 UTC Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598455 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.598499 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.701783 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805118 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805170 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.805192 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908271 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908331 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:13 crc kubenswrapper[4985]: I0128 18:14:13.908390 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:13Z","lastTransitionTime":"2026-01-28T18:14:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.011507 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115214 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.115318 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218825 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.218860 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263058 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263121 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263225 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.263239 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263291 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263440 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263600 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.263747 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321728 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.321776 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.424195 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.527457 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.569638 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:50:39.421002 +0000 UTC Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630395 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.630407 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.732981 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.835989 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.903910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.903981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.904001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.904029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.904048 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.923589 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928679 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928721 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.928780 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.949397 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.955871 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:14 crc kubenswrapper[4985]: E0128 18:14:14.976485 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
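Interleaved with the webhook failures, the kubelet keeps republishing NodeNotReady for an unrelated reason: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ contains no CNI configuration yet; on OpenShift that file is typically written by the OVN-Kubernetes node pods once they come up. A simplified sketch of the readiness check behind the message (an illustration assuming the runtime just looks for a config file in the conf directory, not CRI-O's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// networkReady reports whether at least one CNI config file exists in confDir.
func networkReady(confDir string) (bool, string) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, fmt.Sprintf("cannot read %s: %v", confDir, err)
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, "found " + e.Name()
		}
	}
	return false, "no CNI configuration file in " + confDir
}

func main() {
	ready, detail := networkReady("/etc/kubernetes/cni/net.d")
	fmt.Printf("NetworkReady=%v (%s)\n", ready, detail)
}

Until a config file appears, every status sync re-records the same four node events, which is why the NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady quartet repeats roughly every 100 ms below.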
event="NodeHasNoDiskPressure" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:14 crc kubenswrapper[4985]: I0128 18:14:14.981464 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:14Z","lastTransitionTime":"2026-01-28T18:14:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: E0128 18:14:15.000416 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:14Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.005572 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: E0128 18:14:15.021744 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:15Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:15Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:15 crc kubenswrapper[4985]: E0128 18:14:15.021893 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023833 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
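The two entries above close out one full retry cycle: counting the truncated entry at the top of this excerpt, the kubelet made five patch attempts (18:14:14.949397, .976485, 18:14:15.000416, .021744, plus the earlier one) before logging "Unable to update node status". That budget matches nodeStatusUpdateRetry = 5 in the upstream kubelet. A sketch of the loop, with patchNodeStatus as a hypothetical stand-in for the real API call:

package main

import (
	"errors"
	"fmt"
)

// nodeStatusUpdateRetry mirrors the fixed retry budget the upstream kubelet uses.
const nodeStatusUpdateRetry = 5

// patchNodeStatus stands in for the real PATCH against the API server,
// which keeps failing while the webhook certificate is expired.
func patchNodeStatus() error {
	return errors.New(`failed calling webhook "node.network-node-identity.openshift.io"`)
}

func updateNodeStatus() error {
	for i := 0; i < nodeStatusUpdateRetry; i++ {
		if err := patchNodeStatus(); err != nil {
			fmt.Printf("Error updating node status, will retry: %v\n", err)
			continue
		}
		return nil
	}
	return errors.New("update node status exceeds retry count")
}

func main() {
	if err := updateNodeStatus(); err != nil {
		fmt.Println("Unable to update node status:", err)
	}
}

The whole cycle then restarts on the next status sync, which is why the same large patch payload keeps reappearing in the log until the webhook certificate is renewed.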
event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023874 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.023901 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.127794 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.230978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.231091 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333934 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.333947 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.436409 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.539685 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.570924 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:24:08.274059605 +0000 UTC Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.642478 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.745977 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.746054 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849354 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.849506 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953921 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:15 crc kubenswrapper[4985]: I0128 18:14:15.953943 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:15Z","lastTransitionTime":"2026-01-28T18:14:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.057481 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160714 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.160730 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.263507 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.263779 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.263992 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.264061 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.264239 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.264284 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.264548 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:16 crc kubenswrapper[4985]: E0128 18:14:16.264721 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.266866 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.370132 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.473178 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.571896 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 03:45:07.291829076 +0000 UTC Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576679 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576747 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.576759 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.684592 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.787871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.787962 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.787988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.788024 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.788055 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.891206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.891358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.891380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.891407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.891426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.994420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.994740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.994939 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.995097 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:16 crc kubenswrapper[4985]: I0128 18:14:16.995240 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:16Z","lastTransitionTime":"2026-01-28T18:14:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.099157 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.100021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.100227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.100496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.100692 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.203811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.203845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.203854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.203868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.203880 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.306834 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.306875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.306887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.306902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.306913 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.410446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.410527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.410566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.410603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.410626 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.513713 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.513803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.513818 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.513862 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.513879 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.572089 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 18:15:40.297290557 +0000 UTC Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.616666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.616761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.616787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.616820 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.616843 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.720089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.720148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.720160 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.720178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.720191 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.823068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.823130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.823151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.823182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.823205 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.926310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.926364 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.926382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.926411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:17 crc kubenswrapper[4985]: I0128 18:14:17.926433 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:17Z","lastTransitionTime":"2026-01-28T18:14:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.030028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.030199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.030212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.030227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.030236 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.132797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.132844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.132856 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.132874 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.132887 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.235824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.235877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.235893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.235918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.235938 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263231 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263303 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263303 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.263360 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263443 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263589 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263670 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:18 crc kubenswrapper[4985]: E0128 18:14:18.263709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.338161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.338275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.338365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.338386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.338400 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.441223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.441281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.441294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.441314 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.441330 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.543779 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.543857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.543892 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.543912 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.543925 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.572942 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 01:03:26.818602208 +0000 UTC Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.647650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.647711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.647724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.647739 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.647767 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.751014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.751059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.751098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.751117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.751131 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.859613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.859660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.859670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.859691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.859703 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.962632 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.962680 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.962691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.962707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:18 crc kubenswrapper[4985]: I0128 18:14:18.962718 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:18Z","lastTransitionTime":"2026-01-28T18:14:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.065792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.065855 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.065868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.065893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.065907 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.169858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.169911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.169927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.169947 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.169963 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.271813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.271875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.271889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.271905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.271917 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.374854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.374896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.374905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.374923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.374934 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.477371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.477408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.477418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.477431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.477441 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.573125 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:31:52.273059548 +0000 UTC Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.580650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.580688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.580701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.580722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.580734 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.683572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.683628 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.683641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.683661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.683675 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.786035 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.786071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.786081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.786094 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.786105 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.889303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.889386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.889405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.889433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.889452 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.992895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.992950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.992963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.992982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:19 crc kubenswrapper[4985]: I0128 18:14:19.992996 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:19Z","lastTransitionTime":"2026-01-28T18:14:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.095559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.095606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.095618 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.095636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.095650 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.198383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.198441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.198465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.198496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.198521 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.263590 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.263628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.263730 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.263946 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.263984 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.264000 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.264220 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:20 crc kubenswrapper[4985]: E0128 18:14:20.264327 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.300806 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.300851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.300863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.300879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.300892 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.403172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.403231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.403245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.403286 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.403304 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.508437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.508478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.508493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.508514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.508529 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.573767 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 14:25:49.075299492 +0000 UTC Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.610698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.610744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.610757 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.610773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.610786 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.713700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.713773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.713787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.713807 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.713823 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.816751 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.816789 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.816799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.816824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.816840 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.919765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.919812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.919824 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.919846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:20 crc kubenswrapper[4985]: I0128 18:14:20.919859 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:20Z","lastTransitionTime":"2026-01-28T18:14:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.021923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.021970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.021984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.022003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.022018 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.124719 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.124772 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.124782 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.124805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.124818 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.227409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.227462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.227676 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.227759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.227832 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.279835 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.294312 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.316292 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.328827 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.330941 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.348604 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.359651 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.374441 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.389786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.403488 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.423366 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.435943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.435983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc 
kubenswrapper[4985]: I0128 18:14:21.435992 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.436008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.436019 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.438108 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\
\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.449180 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.463802 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.477368 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.493188 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.507004 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.520813 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-l
ib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:21Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.539348 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.574976 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 13:41:57.52558498 +0000 UTC Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.643338 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745907 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.745920 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848230 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848514 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.848587 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:21 crc kubenswrapper[4985]: I0128 18:14:21.952795 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:21Z","lastTransitionTime":"2026-01-28T18:14:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.056939 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160146 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.160171 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.262651 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263274 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.263115 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263125 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263353 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:22 crc kubenswrapper[4985]: E0128 18:14:22.263207 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365033 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.365110 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467793 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.467827 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570872 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.570887 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.575902 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 00:10:52.033356813 +0000 UTC Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673442 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.673452 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.775919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776065 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776084 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.776114 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879357 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.879371 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:22 crc kubenswrapper[4985]: I0128 18:14:22.982234 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:22Z","lastTransitionTime":"2026-01-28T18:14:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.084961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085047 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.085058 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188196 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.188224 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291167 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.291278 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.394573 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497530 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.497548 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.576018 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 11:09:04.040155691 +0000 UTC Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.600951 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.703597 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.806192 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909537 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:23 crc kubenswrapper[4985]: I0128 18:14:23.909571 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:23Z","lastTransitionTime":"2026-01-28T18:14:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.012589 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115588 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.115689 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.218526 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263697 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263745 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263826 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.263855 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.263997 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.264069 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.264179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:24 crc kubenswrapper[4985]: E0128 18:14:24.264286 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322013 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322075 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322111 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.322123 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.424848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.527418 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.577079 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 15:39:49.85809525 +0000 UTC Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629869 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.629900 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733777 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.733888 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836818 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.836996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.837063 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:24 crc kubenswrapper[4985]: I0128 18:14:24.940650 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:24Z","lastTransitionTime":"2026-01-28T18:14:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042932 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042956 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.042974 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.145416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.145781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.145916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.146006 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.146101 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248614 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.248703 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309823 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309837 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.309848 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.322424 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.326978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.327062 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.342383 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349430 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.349447 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.363462 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368369 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.368445 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.380628 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.384862 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.397550 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:25Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:25Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:25 crc kubenswrapper[4985]: E0128 18:14:25.397704 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399694 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399741 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.399790 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503155 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503479 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503550 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.503700 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.578108 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 22:58:49.586576544 +0000 UTC Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.606737 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710759 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.710841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.711209 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.813922 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:25 crc kubenswrapper[4985]: I0128 18:14:25.917175 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:25Z","lastTransitionTime":"2026-01-28T18:14:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.019899 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122934 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.122960 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225602 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225648 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.225676 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.265336 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.265694 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.266056 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.266184 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.266429 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.266533 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.266741 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.266848 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.269607 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.269857 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328910 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328941 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.328964 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431694 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431718 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.431728 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.535912 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.535992 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.536017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.536048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.536073 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.579154 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 22:03:06.315057537 +0000 UTC Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.639372 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.695987 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.696152 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:26 crc kubenswrapper[4985]: E0128 18:14:26.696218 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. 
No retries permitted until 2026-01-28 18:14:58.696200922 +0000 UTC m=+109.522763733 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.743517 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846826 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.846841 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
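[annotation] The metrics-certs mount fails because the referenced secret object is not yet registered with the kubelet's object cache, and the retry is pushed out 32s. Failed volume operations back off exponentially between attempts; the sketch below shows that schedule under commonly cited kubelet defaults (an initial 500ms delay doubling toward a cap of about two minutes; treat the exact constants as assumptions). The log's "durationBeforeRetry 32s" corresponds to 500ms doubled six times.

package main

import (
	"fmt"
	"time"
)

// Assumed constants modeled on the kubelet's exponential backoff for
// failed volume operations.
const (
	initialDelay = 500 * time.Millisecond
	maxDelay     = 2*time.Minute + 2*time.Second
)

func main() {
	delay := initialDelay
	for attempt := 1; ; attempt++ {
		fmt.Printf("attempt %2d: next retry no sooner than %v\n", attempt, delay)
		if delay == maxDelay {
			break // backoff is capped from here on
		}
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

[end annotation]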
Has your network provider started?"} Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:26 crc kubenswrapper[4985]: I0128 18:14:26.949419 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:26Z","lastTransitionTime":"2026-01-28T18:14:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.053196 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.156610 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259898 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.259995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.260009 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362681 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.362705 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.465767 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.568825 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.579468 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 13:43:51.32191099 +0000 UTC Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.671425 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
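[annotation] The kubelet-serving certificate lines are worth a close look: the certificate is valid until 2026-02-24, yet every recomputed rotation deadline (2025-11-14, 2025-11-12, ...) already lies in the past relative to the node clock of 2026-01-28, so the certificate manager re-evaluates roughly once a second and draws a fresh jittered deadline each time. In client-go's certificate manager the deadline falls at a random point in roughly the 70-90% span of the certificate's validity window; the 0.7/0.2 factors below are assumptions based on that behavior, and the notBefore value is inferred, not taken from the log.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant between 70% and 90% of the
// certificate's validity period, counted from notBefore. Each call
// returns a different deadline, which is why the log shows a new
// "rotation deadline" on every sync.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, 2, 24, 5, 53, 3, 0, time.UTC) // assumed issue time
	notAfter := time.Date(2026, 2, 24, 5, 53, 3, 0, time.UTC)  // expiry from the log
	for i := 0; i < 3; i++ {
		fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
	}
}

For a one-year certificate issued in February 2025, 70-90% of the window lands between October and December 2025, consistent with the deadlines logged above.
[end annotation]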
Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775131 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.775177 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878347 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878381 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.878396 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981649 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:27 crc kubenswrapper[4985]: I0128 18:14:27.981668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:27Z","lastTransitionTime":"2026-01-28T18:14:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.084595 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.188597 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263230 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263237 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263341 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.263371 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.263695 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.264206 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.264339 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:28 crc kubenswrapper[4985]: E0128 18:14:28.264460 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292501 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292530 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.292554 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399584 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399602 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.399619 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502725 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502776 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502813 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.502826 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.580011 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 07:18:51.911022586 +0000 UTC Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.605861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606213 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606458 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.606877 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.709974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710029 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.710102 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.812622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.812931 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.812995 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.813064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.813133 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.916917 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.916982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.917002 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.917028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:28 crc kubenswrapper[4985]: I0128 18:14:28.917047 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:28Z","lastTransitionTime":"2026-01-28T18:14:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021833 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021853 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.021945 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.124935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.125055 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.228722 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332208 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.332226 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.436238 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.540688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541413 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.541471 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.581188 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 18:18:30.071064521 +0000 UTC Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645153 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645219 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.645350 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748900 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.748977 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852697 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.852711 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:29 crc kubenswrapper[4985]: I0128 18:14:29.956648 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:29Z","lastTransitionTime":"2026-01-28T18:14:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059625 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059681 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.059710 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.162970 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.163089 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.263717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.263732 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.263901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.264509 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.264692 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.264923 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.265050 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:30 crc kubenswrapper[4985]: E0128 18:14:30.265132 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267309 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267345 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.267358 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.370400 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474206 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.474425 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.578536 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.581872 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 12:43:08.54746056 +0000 UTC Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681570 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.681629 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785329 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.785437 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880000 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/0.log" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880073 4985 generic.go:334] "Generic (PLEG): container finished" podID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" containerID="9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb" exitCode=1 Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880117 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerDied","Data":"9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.880668 4985 scope.go:117] "RemoveContainer" containerID="9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889396 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.889499 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:30Z","lastTransitionTime":"2026-01-28T18:14:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
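[annotation] Here the generic PLEG (pod lifecycle event generator) notices during a relist that the kube-multus container exited with code 1, emits a ContainerDied event, and the kubelet marks the dead container for removal before the restart policy brings it back. A stripped-down sketch of how a relist derives ContainerDied events by diffing consecutive container-state snapshots; the types are illustrative only.

package main

import "fmt"

// containerState is an illustrative snapshot of container states,
// keyed by container ID, as one relist pass would capture them.
type containerState map[string]string // id -> "running" | "exited"

// diffDied returns the IDs that were running in the previous snapshot
// but have exited in the current one, i.e. the "container finished" /
// ContainerDied events seen in the log.
func diffDied(old, cur containerState) []string {
	var died []string
	for id, was := range old {
		if was == "running" && cur[id] == "exited" {
			died = append(died, id)
		}
	}
	return died
}

func main() {
	id := "9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb"
	old := containerState{id: "running"}
	cur := containerState{id: "exited"}
	for _, d := range diffDied(old, cur) {
		fmt.Println("SyncLoop (PLEG): ContainerDied", d)
	}
}

[end annotation]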
Has your network provider started?"} Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.897668 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.918392 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.929716 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.943637 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.956356 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.969801 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.982198 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:30 crc kubenswrapper[4985]: I0128 18:14:30.995328 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:30Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000461 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.000549 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.011786 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.024836 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.041658 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.055997 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.067919 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.095835 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.104929 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.108875 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.121196 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.133049 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.207692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.207996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.208132 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.208218 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.208307 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.278447 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.294399 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.310801 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.311625 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.324075 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.341266 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the 
condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.354580 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.367169 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.387431 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb79220
7732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.399024 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.410955 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414404 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.414483 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.423500 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.435570 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.449334 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.460877 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.479954 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.491347 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.502521 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517784 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 
crc kubenswrapper[4985]: I0128 18:14:31.517844 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.517891 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.582417 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 05:21:47.198515082 +0000 UTC Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.620935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621001 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621022 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621051 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.621076 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724571 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.724611 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.828326 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.894109 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/0.log" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.894641 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.915917 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932738 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.932872 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:31Z","lastTransitionTime":"2026-01-28T18:14:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.934883 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.951353 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.966130 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.983643 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:31 crc kubenswrapper[4985]: I0128 18:14:31.998757 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:31Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.016609 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035935 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035914 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.035984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.058668 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.074293 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.092523 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.107664 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.122579 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.138273 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139472 4985 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139651 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.139844 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.151823 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.170313 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5
db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io
/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.182146 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:32Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242750 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242778 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.242788 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263267 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263197 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263373 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263471 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263539 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.263721 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:32 crc kubenswrapper[4985]: E0128 18:14:32.263958 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345464 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345507 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.345524 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449537 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.449547 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.552671 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.583291 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 13:45:58.875266737 +0000 UTC Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.656674 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760677 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760758 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.760833 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863631 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.863644 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:32 crc kubenswrapper[4985]: I0128 18:14:32.967517 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:32Z","lastTransitionTime":"2026-01-28T18:14:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069922 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.069968 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.173991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174052 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.174108 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276544 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.276601 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379808 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.379962 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483021 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483128 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483177 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.483197 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.584086 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:41:08.635826757 +0000 UTC Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586816 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586841 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.586858 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690281 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690346 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.690375 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.799440 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902378 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:33 crc kubenswrapper[4985]: I0128 18:14:33.902418 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:33Z","lastTransitionTime":"2026-01-28T18:14:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.006429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109347 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109418 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.109455 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.211971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.212143 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.263685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.263685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.264334 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.264535 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264843 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264859 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264561 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:34 crc kubenswrapper[4985]: E0128 18:14:34.264931 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315692 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.315755 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419403 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419434 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.419459 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523160 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523288 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.523307 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.584827 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 01:08:42.684486952 +0000 UTC Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.626507 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729600 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729672 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.729710 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833471 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.833509 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936586 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936663 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:34 crc kubenswrapper[4985]: I0128 18:14:34.936721 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:34Z","lastTransitionTime":"2026-01-28T18:14:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040030 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.040165 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143743 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143788 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.143807 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247328 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247352 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.247413 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.286726 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351666 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.351684 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401653 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401673 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.401684 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.416335 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422590 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422655 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.422671 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.437789 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443351 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.443426 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.463564 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471630 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.471749 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.485987 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492187 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492267 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.492404 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.506095 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:35Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:35 crc kubenswrapper[4985]: E0128 18:14:35.506493 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.508708 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.585656 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:56:41.060752651 +0000 UTC Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.612474 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715825 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715925 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.715935 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819508 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.819623 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922023 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922093 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:35 crc kubenswrapper[4985]: I0128 18:14:35.922136 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:35Z","lastTransitionTime":"2026-01-28T18:14:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025274 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.025369 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127815 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127827 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.127860 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230963 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.230991 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.231014 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263007 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263102 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263022 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263207 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.263149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263426 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263583 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:36 crc kubenswrapper[4985]: E0128 18:14:36.263705 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334361 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.334391 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437849 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.437902 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.541534 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.586044 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:56:21.822497904 +0000 UTC Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644446 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644552 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.644570 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.748439 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.851332 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:36 crc kubenswrapper[4985]: I0128 18:14:36.954357 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:36Z","lastTransitionTime":"2026-01-28T18:14:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058087 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.058135 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.161903 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.264341 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265621 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.265668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368882 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.368923 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472288 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472322 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.472339 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574864 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574928 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.574984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.586966 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 13:14:25.985471723 +0000 UTC Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678829 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678908 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.678918 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781700 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781719 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.781733 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884522 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884574 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.884617 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.918376 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.920813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.921323 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.937798 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.952633 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.966893 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.978515 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987378 4985 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987411 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987441 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.987454 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:37Z","lastTransitionTime":"2026-01-28T18:14:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:37 crc kubenswrapper[4985]: I0128 18:14:37.994919 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:37Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.005886 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.029370 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.041412 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.047669 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.047781 
4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.047833 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.047809013 +0000 UTC m=+152.874371834 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.047951 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.047988 4985 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.048045 4985 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.048107 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.048076251 +0000 UTC m=+152.874639082 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.048136 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.048122023 +0000 UTC m=+152.874684854 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.054198 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.068557 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.083767 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090455 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090484 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.090499 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.099547 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.112229 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.135020 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network 
cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.148836 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.148882 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149017 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149034 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149048 4985 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149093 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.149075209 +0000 UTC m=+152.975638030 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149251 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149290 4985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149298 4985 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.149321 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.149313826 +0000 UTC m=+152.975876647 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196094 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.196238 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.198787 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.217197 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.230859 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.246331 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.264625 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.264698 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.264645 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.264832 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.264946 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.265015 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.265062 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.265105 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298909 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298937 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298946 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.298970 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401753 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401794 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.401812 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.505993 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.506117 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.588012 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 12:40:38.543634065 +0000 UTC Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610528 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.610608 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713405 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713452 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.713475 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816708 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816765 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.816786 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919644 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919703 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.919763 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:38Z","lastTransitionTime":"2026-01-28T18:14:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.927802 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.928839 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/2.log" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.933494 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" exitCode=1 Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.933565 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc"} Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.933638 4985 scope.go:117] "RemoveContainer" containerID="14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.934712 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:14:38 crc kubenswrapper[4985]: E0128 18:14:38.935044 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.960675 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:38 crc kubenswrapper[4985]: I0128 18:14:38.980833 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:38Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.005157 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022769 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022851 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 
crc kubenswrapper[4985]: I0128 18:14:39.022865 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.022903 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.027429 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.049918 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.070079 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.094555 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 
2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.118517 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.125933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.126107 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.140363 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.154725 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.172782 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.191061 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://14f49b4db69d902d095c0fb7b036c0993cb792207732c8bed43597c915bf9d82\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:11Z\\\",\\\"message\\\":\\\"77 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:11.241378 6677 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:11.241414 6677 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:11.241432 6677 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 18:14:11.241440 6677 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 18:14:11.241469 6677 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0128 18:14:11.241506 6677 factory.go:656] Stopping watch factory\\\\nI0128 18:14:11.241523 6677 ovnkube.go:599] Stopped ovnkube\\\\nI0128 18:14:11.241568 6677 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:11.241585 6677 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 18:14:11.241599 6677 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:11.241610 6677 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 18:14:11.241620 6677 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 18:14:11.241630 6677 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 18:14:11.241643 6677 metrics.go:553] Stopping metrics server at address \\\\\\\"127.0.0.1:29103\\\\\\\"\\\\nF0128 18:14:11.241732 6677 ovnkube.go:137] failed to run ovnkube: [failed to start network cont\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:10Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed 
*v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\
"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.203645 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.218866 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.228986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.229053 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.232013 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.241699 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.254033 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.264724 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332190 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332233 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.332350 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435569 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435635 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435658 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.435675 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539591 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.539625 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.588736 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 19:39:18.166322547 +0000 UTC Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642371 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642506 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.642573 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.746456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.746871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.747039 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.747255 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.747461 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851820 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851894 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.851937 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.939941 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.945341 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:14:39 crc kubenswrapper[4985]: E0128 18:14:39.945647 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954780 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954830 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954850 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.954900 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:39Z","lastTransitionTime":"2026-01-28T18:14:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.967343 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:39 crc kubenswrapper[4985]: I0128 18:14:39.986141 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:39Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.008449 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.021949 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.040198 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.056660 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057570 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057622 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057637 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.057669 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.071621 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.086236 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.108335 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.129252 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.145188 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161325 4985 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.161438 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.162943 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.182559 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.199880 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.227642 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.243707 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.254796 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263209 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263299 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263347 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263395 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263415 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263514 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263601 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:40 crc kubenswrapper[4985]: E0128 18:14:40.263713 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263836 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.263883 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.270706 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:40Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367252 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367431 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.367473 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470369 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470382 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.470416 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573059 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573138 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573161 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.573175 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.589935 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 06:30:54.09569034 +0000 UTC Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676599 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676633 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.676660 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.779452 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882427 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882504 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882517 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.882547 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987250 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987359 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:40 crc kubenswrapper[4985]: I0128 18:14:40.987370 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:40Z","lastTransitionTime":"2026-01-28T18:14:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090482 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090548 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090566 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.090578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.193534 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.280926 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296821 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296934 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296964 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.296979 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.305813 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.325348 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.340520 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.356166 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.371337 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.388959 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399809 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399843 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.399867 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.411476 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.430981 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.463346 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.478517 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.496972 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504306 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504338 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.504349 4985 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.515910 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.530072 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.542483 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.554068 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.570804 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.584133 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:41Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.590137 4985 certificate_manager.go:356] 
kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 09:10:29.835759676 +0000 UTC Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607164 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607180 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.607222 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709225 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709272 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.709286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
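[Annotation] The repeated "failed calling webhook \"pod.network-node-identity.openshift.io\"" errors above share a single root cause: the webhook's serving certificate at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28 (typical of a resumed CRC snapshot), so every pod status patch the kubelet sends is rejected at the TLS layer. A minimal Go sketch of the same check, run from the node; the endpoint is taken from the log, and InsecureSkipVerify is used only so the handshake completes far enough to read the peer certificate:

    // Inspect the webhook's serving certificate and compare its validity
    // window against the local clock, mirroring the x509 failure above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "time"
    )

    func main() {
        conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
            InsecureSkipVerify: true, // inspect the cert only; do not trust it
        })
        if err != nil {
            fmt.Println("dial failed:", err)
            return
        }
        defer conn.Close()

        cert := conn.ConnectionState().PeerCertificates[0]
        fmt.Printf("NotBefore: %s\nNotAfter:  %s\n", cert.NotBefore, cert.NotAfter)
        if time.Now().After(cert.NotAfter) {
            fmt.Println("certificate has expired") // the condition kubelet reports
        }
    }

On this node the sketch would be expected to print a NotAfter of 2025-08-24T17:21:41Z and report the certificate as expired.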
Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812712 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812770 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812783 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812799 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.812809 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.915976 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:41 crc kubenswrapper[4985]: I0128 18:14:41.916076 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:41Z","lastTransitionTime":"2026-01-28T18:14:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019497 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019523 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019555 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.019582 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122842 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.122945 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226166 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.226179 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263008 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263274 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.263306 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263194 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263467 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263724 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:42 crc kubenswrapper[4985]: E0128 18:14:42.263853 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328860 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.328889 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
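[Annotation] Every "Error syncing pod, skipping" and NodeNotReady entry above reduces to the same condition: /etc/kubernetes/cni/net.d/ contains no CNI configuration yet, so the container runtime reports NetworkReady=false and the kubelet refuses to create sandboxes for pods that need the cluster network (network-check-source, network-check-target, network-metrics-daemon, networking-console-plugin). A hypothetical check that mirrors the kubelet's complaint; the path is taken from the log, and the extension list is an assumption about what counts as a CNI config file:

    // List the CNI configuration directory named in the log and report
    // whether any plausible CNI config is present.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        dir := "/etc/kubernetes/cni/net.d"
        entries, err := os.ReadDir(dir)
        if err != nil {
            fmt.Println("cannot read CNI dir:", err)
            return
        }
        found := false
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json": // assumed CNI config extensions
                fmt.Println("found CNI config:", e.Name())
                found = true
            }
        }
        if !found {
            fmt.Println("no CNI configuration file in", dir) // matches the log message
        }
    }

Until a network provider writes a config into that directory, the Ready condition stays False and the sandbox-creation retries above keep repeating.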
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432614 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432641 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.432654 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.535996 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536097 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.536108 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.591187 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 23:01:24.823490125 +0000 UTC Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.639251 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741760 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.741820 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.845532 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949410 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:42 crc kubenswrapper[4985]: I0128 18:14:42.949672 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:42Z","lastTransitionTime":"2026-01-28T18:14:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.052905 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053030 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053050 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053078 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.053101 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156053 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156113 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156125 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.156157 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259436 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259485 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.259503 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362581 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362613 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.362651 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465327 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.465354 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568302 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568315 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568335 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.568347 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.592389 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 12:14:31.341134407 +0000 UTC Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671363 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671387 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.671404 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774033 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774074 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774086 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774102 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.774114 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.876433 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979046 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979055 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:43 crc kubenswrapper[4985]: I0128 18:14:43.979078 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:43Z","lastTransitionTime":"2026-01-28T18:14:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082227 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082330 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082356 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.082372 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186124 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186211 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.186224 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264371 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264506 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.264556 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264372 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.264708 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.264897 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.264988 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:44 crc kubenswrapper[4985]: E0128 18:14:44.265379 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289360 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.289396 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.392438 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.392512 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.392531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.393043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.393101 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497367 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.497387 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.592860 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 10:20:32.212325996 +0000 UTC Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.600565 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704756 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.704816 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808395 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808478 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808541 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.808593 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912362 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912409 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:44 crc kubenswrapper[4985]: I0128 18:14:44.912427 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:44Z","lastTransitionTime":"2026-01-28T18:14:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015775 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.015807 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.119578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223212 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223246 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.223310 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325924 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.325958 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.429943 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430025 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430043 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430071 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.430094 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.533984 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.593414 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 09:00:49.208729713 +0000 UTC Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637687 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637749 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637771 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.637815 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740608 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740686 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.740729 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743858 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743956 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.743982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.744001 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.765862 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.771805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.771911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.771967 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.772017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.772042 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.795871 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801796 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801868 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801926 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.801950 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.822206 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827126 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827144 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827172 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.827189 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.848517 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856342 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856456 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856491 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.856514 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.876961 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:45Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:45 crc kubenswrapper[4985]: E0128 18:14:45.877134 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
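event="NodeHasSufficientMemory"

The status PATCH above (node image list plus nodeInfo) never lands: the node.network-node-identity.openshift.io webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-28T18:14:45Z, so every TLS handshake to https://127.0.0.1:9743 is rejected and the kubelet eventually gives up ("update node status exceeds retry count"). The rejection is the ordinary X.509 validity-window comparison; a minimal Go sketch of the same check follows, with a hypothetical certificate path that is not taken from this log.

// certcheck.go: the NotBefore/NotAfter test that fails in the handshake above.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/tmp/webhook-serving.crt") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// TLS accepts the certificate only inside [NotBefore, NotAfter];
	// "expired or is not yet valid" in the log is exactly this comparison.
	now := time.Now()
	switch {
	case now.Before(cert.NotBefore):
		fmt.Printf("not yet valid: NotBefore=%s\n", cert.NotBefore)
	case now.After(cert.NotAfter):
		fmt.Printf("expired: NotAfter=%s\n", cert.NotAfter)
	default:
		fmt.Printf("valid until %s\n", cert.NotAfter)
	}
}
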
event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880527 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.880542 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.982978 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983016 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983027 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:45 crc kubenswrapper[4985]: I0128 18:14:45.983056 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:45Z","lastTransitionTime":"2026-01-28T18:14:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.086893 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087534 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087669 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.087791 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191089 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191450 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.191749 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.263955 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.263960 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264109 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.263970 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264284 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.264446 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264586 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:46 crc kubenswrapper[4985]: E0128 18:14:46.264441 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294465 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.294574 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398000 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398561 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398734 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.398937 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.399107 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503846 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.503870 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.593836 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:20:32.871072433 +0000 UTC Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.607704 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.607949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.608079 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.608215 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.608393 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
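Has your network provider started?"}

Every sync above ends with setters.go:603 writing the same Ready=False condition on node crc. For reference, the condition serialized in those records has the shape sketched below, built with the public k8s.io/api types rather than the kubelet's internal setter; treat it as an illustration of the logged fields, not kubelet source.

// readycondition.go: the Ready=False condition as logged by setters.go above.
package main

import (
	"encoding/json"
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	now := metav1.Now()
	cond := v1.NodeCondition{
		Type:               v1.NodeReady,
		Status:             v1.ConditionFalse,
		LastHeartbeatTime:  now,
		LastTransitionTime: now,
		Reason:             "KubeletNotReady",
		Message: "container runtime network not ready: NetworkReady=false " +
			"reason:NetworkPluginNotReady message:Network plugin returns error: " +
			"no CNI configuration file in /etc/kubernetes/cni/net.d/. " +
			"Has your network provider started?",
	}
	out, err := json.Marshal(cond)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out)) // same shape as the condition={...} field above
}
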
Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712400 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712432 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.712456 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816171 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816237 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.816298 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.919797 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920744 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:46 crc kubenswrapper[4985]: I0128 18:14:46.920959 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:46Z","lastTransitionTime":"2026-01-28T18:14:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024674 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.024734 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.127349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.127420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.127473 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.127502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.127521 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.231023 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.231091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.231109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.231135 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.231153 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.334716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.335294 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.335513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.335730 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.335918 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.439695 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.440122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.440303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.440561 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.440756 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.543659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.543711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.543722 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.543740 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.543755 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.594537 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 12:07:11.387238924 +0000 UTC Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.649323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.649636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.649716 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.649802 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.649937 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.752972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.753151 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.753236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.753406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.753446 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
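Has your network provider started?"}

The kubernetes.io/kubelet-serving lines above print a different rotation deadline on every pass (2026-01-14, then 2025-12-31; still earlier dates appear further down), which is consistent with the deadline being re-drawn as a jittered point late in the certificate's validity window and already lying in the past, so rotation is retried on each sync. A minimal sketch of such a computation follows; the 70-90% jitter band and the NotBefore date are assumptions, and only the 2026-02-24 05:53:03 expiry comes from these records.

// rotationdeadline.go: jittered rotation deadline, re-drawn on each attempt.
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random point in an assumed 70-90% band of the
// certificate's validity window; once that point is in the past, the caller
// keeps retrying rotation, which is why the log recomputes it every second.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notBefore := time.Date(2025, time.June, 1, 0, 0, 0, 0, time.UTC)      // assumed issue time
	notAfter := time.Date(2026, time.February, 24, 5, 53, 3, 0, time.UTC) // expiry from the log
	fmt.Println("rotation deadline:", rotationDeadline(notBefore, notAfter))
}
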
Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.856223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.856280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.856293 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.856310 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.856322 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.959028 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.959088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.959105 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.959129 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:47 crc kubenswrapper[4985]: I0128 18:14:47.959146 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:47Z","lastTransitionTime":"2026-01-28T18:14:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.063066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.063154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.063181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.063220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.063304 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.167742 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.167803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.167820 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.167845 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.167864 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263366 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263490 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263534 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.263559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264045 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264186 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264355 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:48 crc kubenswrapper[4985]: E0128 18:14:48.264457 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271132 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271169 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.271224 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.279758 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.374919 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.374979 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.375003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.375032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.375054 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478020 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478136 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.478155 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.581915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.581971 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.581984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.582003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.582016 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.595112 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 07:35:46.724707907 +0000 UTC Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685454 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685463 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685477 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.685487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787727 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787787 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.787797 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891594 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891627 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.891640 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994286 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994348 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:48 crc kubenswrapper[4985]: I0128 18:14:48.994408 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:48Z","lastTransitionTime":"2026-01-28T18:14:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097445 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097469 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.097487 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200981 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.200998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.201011 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303560 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.303655 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407360 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407448 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.407463 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511082 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511184 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511202 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511228 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.511286 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.596163 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 00:27:35.095159805 +0000 UTC Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615318 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615429 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615483 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.615505 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719276 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719291 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.719328 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822873 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822929 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822945 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822967 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.822990 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:49 crc kubenswrapper[4985]: I0128 18:14:49.927445 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:49Z","lastTransitionTime":"2026-01-28T18:14:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031100 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031149 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031182 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.031192 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134385 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134472 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134526 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.134548 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238077 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238190 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.238209 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.263947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.263998 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.264122 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.264222 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264226 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264363 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264472 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:50 crc kubenswrapper[4985]: E0128 18:14:50.264639 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.341338 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.341386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.341397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.341415 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.341429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.444766 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.444819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.444835 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.444859 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.444876 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.547720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.547763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.547774 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.547791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.547804 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.597318 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 02:48:50.343595964 +0000 UTC Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.651486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.651556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.651575 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.651603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.651626 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.754590 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.754665 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.754682 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.754715 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.755011 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.858370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.858457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.858488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.858524 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.858551 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.962368 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.962480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.962505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.962540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:50 crc kubenswrapper[4985]: I0128 18:14:50.962562 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:50Z","lastTransitionTime":"2026-01-28T18:14:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.066709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.067207 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.067315 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.067417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.067503 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.171064 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.171139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.171158 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.171183 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.171203 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.274629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.274691 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.274709 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.274732 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.274749 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.287407 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.307181 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.331357 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.348397 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.365994 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378401 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378440 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.378463 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.382549 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.396320 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.429825 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be
487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.448696 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"be38081c-43d9-4241-aea1-a14fb312a0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f697b1c16bcd1e36101e6b455b45641dbffe1cbf333e78f6a61de9228652f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67b
c07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.471468 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791
fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482832 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482914 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482939 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 
18:14:51.482974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.482997 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.495419 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a
8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.520398 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.542430 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.564880 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.591861 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.591983 4985 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.592003 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.592033 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.592052 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.598111 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 02:46:24.918812857 +0000 UTC Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.609190 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.642506 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.657093 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.671790 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.688953 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:51Z is after 2025-08-24T17:21:41Z" Jan 28 
18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694773 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694817 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.694837 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.797974 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798017 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798041 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.798051 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901340 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901374 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:51 crc kubenswrapper[4985]: I0128 18:14:51.901397 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:51Z","lastTransitionTime":"2026-01-28T18:14:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005367 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005383 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005408 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.005427 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108606 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.108655 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.211069 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.211509 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.211812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.212008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.212193 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263431 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263481 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263489 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.263459 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263642 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263763 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263841 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:52 crc kubenswrapper[4985]: E0128 18:14:52.263936 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315611 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315639 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.315705 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420510 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420531 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.420578 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523890 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523936 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523950 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.523959 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.599124 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:17:17.146800271 +0000 UTC Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627502 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627559 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627576 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.627594 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731335 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731376 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731416 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.731442 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.833973 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834015 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834044 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.834056 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936902 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936923 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936954 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:52 crc kubenswrapper[4985]: I0128 18:14:52.936977 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:52Z","lastTransitionTime":"2026-01-28T18:14:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039660 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039683 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.039702 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142881 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.142905 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247061 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247142 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247194 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.247214 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350320 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350468 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.350498 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454130 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.454207 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556953 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.556989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.557005 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.599999 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:03:23.726794769 +0000 UTC Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660414 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660513 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.660562 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763048 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763101 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763119 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763154 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.763193 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.866583 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.866652 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.866675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.866696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.866708 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.968953 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.968998 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.969008 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.969026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:53 crc kubenswrapper[4985]: I0128 18:14:53.969040 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:53Z","lastTransitionTime":"2026-01-28T18:14:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.072242 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.072308 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.072319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.072337 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.072350 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.176137 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.176201 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.176295 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.176319 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.176332 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.263749 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.263894 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.264028 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.264085 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.264199 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.264371 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.264459 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.265142 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.265144 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:14:54 crc kubenswrapper[4985]: E0128 18:14:54.265540 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278811 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278822 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.278855 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383443 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.383488 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.486802 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.486870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.486879 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.486896 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.486909 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.590812 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.590888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.590906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.590930 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.590948 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.601058 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 18:30:08.969259552 +0000 UTC Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.694217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.694292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.694305 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.694326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.694343 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.798220 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.798317 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.798343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.798373 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.798395 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.902068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.902152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.902170 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.902199 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:54 crc kubenswrapper[4985]: I0128 18:14:54.902223 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:54Z","lastTransitionTime":"2026-01-28T18:14:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.005870 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.005938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.005958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.005985 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.006007 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.109386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.109519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.109540 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.109568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.109587 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.213332 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.213402 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.213420 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.213444 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.213462 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.316585 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.316656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.316680 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.316707 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.316728 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.419664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.419745 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.419767 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.419798 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.419822 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.522944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.523024 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.523045 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.523073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.523134 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.601447 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 08:18:54.760095322 +0000 UTC Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.626762 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.626814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.626831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.626855 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.626874 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.730365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.730435 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.730459 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.730597 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.730732 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.834453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.834518 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.834536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.834558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.834576 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938367 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938451 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938494 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:55 crc kubenswrapper[4985]: I0128 18:14:55.938510 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:55Z","lastTransitionTime":"2026-01-28T18:14:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040284 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.040303 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.064525 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069334 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069399 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069417 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.069429 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.088444 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137412 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137620 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
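Every failed patch in this stretch carries the same root cause in its tail: the serving certificate of the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-01-28. A minimal diagnostic sketch, assuming the endpoint is reachable from the node and the third-party cryptography package is installed (illustrative tooling, not part of the cluster), that pulls the webhook's certificate and reproduces the x509 validity check:

import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # endpoint from the failed webhook Post in the log

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False       # we only want to read the certificate,
ctx.verify_mode = ssl.CERT_NONE  # not to trust it, so skip verification

with socket.create_connection((HOST, PORT), timeout=5) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        # DER copy of the peer certificate; parse it with cryptography
        cert = x509.load_der_x509_certificate(tls.getpeercert(binary_form=True))

now = datetime.now(timezone.utc).replace(tzinfo=None)  # cert times are naive UTC
print("notBefore:", cert.not_valid_before)
print("notAfter: ", cert.not_valid_after)
if now > cert.not_valid_after:
    # This is the condition the kubelet's TLS client reports in the log:
    # "current time ... is after 2025-08-24T17:21:41Z"
    print("EXPIRED: current time is after notAfter")

Renewing that certificate (or correcting a skewed clock) is what actually unblocks the status patch; the retries themselves cannot succeed while the webhook handshake fails verification.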
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137693 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137776 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.137850 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.152690 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158372 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
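For orientation, the escaped err string above is a strategic merge patch against the Node's status subresource: it reorders and updates the four conditions and refreshes allocatable, capacity, the image list, and nodeInfo in a single request. A hedged sketch of issuing the same kind of patch with the kubernetes Python client (hypothetical condition values; assumes the kubernetes package is installed and a working kubeconfig, and is an illustration rather than the kubelet's own code path):

from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod
v1 = client.CoreV1Api()

# Strategic-merge-style body mirroring the "conditions" part of the logged
# payload; the kubelet additionally sends allocatable, capacity, images and
# nodeInfo in the same request.
body = {
    "status": {
        "conditions": [
            {
                "type": "Ready",
                "status": "False",
                "reason": "KubeletNotReady",
                "message": "container runtime network not ready",
            }
        ]
    }
}

# patch_node_status targets the node's /status subresource, which is exactly
# the request being intercepted (and rejected) by the failing webhook.
v1.patch_node_status(name="crc", body=body)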
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158386 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158406 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.158418 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.171186 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175224 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
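Separately from the webhook problem, the Ready=False condition itself is accurate: nothing has written a CNI configuration into /etc/kubernetes/cni/net.d/ yet, so the runtime reports NetworkReady=false. A small sketch of the same directory check, as a hypothetical helper run on the node (the accepted file extensions are an assumption based on common CNI config loaders):

import os

CNI_CONF_DIR = "/etc/kubernetes/cni/net.d"  # path from the log message

def cni_configs(path: str = CNI_CONF_DIR) -> list[str]:
    # Return candidate CNI config files, the way a libcni-style loader
    # would scan the directory.
    try:
        return sorted(
            f for f in os.listdir(path)
            if f.endswith((".conf", ".conflist", ".json"))
        )
    except FileNotFoundError:
        return []

configs = cni_configs()
if configs:
    print("CNI configs found:", configs)
else:
    # Matches the log: "no CNI configuration file in /etc/kubernetes/cni/net.d/"
    print("no CNI configuration file in", CNI_CONF_DIR)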
event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175282 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.175298 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.188712 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ef51598b-c07a-479e-807b-3fca14f8607d\\\",\\\"systemUUID\\\":\\\"a73758a0-c5e5-4e2e-bacd-4099da9969a4\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:14:56Z is after 2025-08-24T17:21:41Z" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.188866 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190659 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
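event="NodeHasSufficientMemory"

The two errors a few entries above are the crux of this stretch of the log: every node-status PATCH is rejected because the node.network-node-identity.openshift.io validating webhook at 127.0.0.1:9743 presents a serving certificate that expired on 2025-08-24, while the node's clock reads 2026-01-28, and after the allowed attempts the kubelet gives up with "update node status exceeds retry count". A minimal sketch for confirming the expiry from the node, assuming Python 3 with the third-party cryptography package is available there; the host and port come from the log line above, and nothing below is OpenShift code:

# Minimal sketch (not OpenShift code): fetch the webhook's serving
# certificate and print its validity window. Host and port are taken from
# the failed call in the log; assumes the third-party "cryptography"
# package (>= 42 for the *_utc accessors).
import socket
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # webhook endpoint from the log

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE  # fetch the cert even though it is expired

with socket.create_connection((HOST, PORT), timeout=5) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        der = tls.getpeercert(binary_form=True)

cert = x509.load_der_x509_certificate(der)
now = datetime.now(timezone.utc)
print("subject:  ", cert.subject.rfc4514_string())
print("notBefore:", cert.not_valid_before_utc)
print("notAfter: ", cert.not_valid_after_utc)
print("expired:  ", now > cert.not_valid_after_utc)

On this log's clock the check should print expired: True against the notAfter of 2025-08-24T17:21:41Z, which is why every status update bounces before it reaches the API server.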
event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190688 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190698 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190719 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.190732 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263778 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263822 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263841 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.263930 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264201 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264361 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264470 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:56 crc kubenswrapper[4985]: E0128 18:14:56.264644 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294623 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294661 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294671 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294689 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.294703 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397567 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397626 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.397668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501426 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501487 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501511 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501546 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.501568 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.602651 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 10:11:51.028489999 +0000 UTC Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605475 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605539 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605592 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.605613 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709292 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.709335 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812535 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812582 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.812605 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916710 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916789 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916854 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:56 crc kubenswrapper[4985]: I0128 18:14:56.916880 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:56Z","lastTransitionTime":"2026-01-28T18:14:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
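Has your network provider started?"}

Each heartbeat cycle above, roughly every 100 ms, repeats the same Ready=False condition: the kubelet reports KubeletNotReady because the container runtime reports NetworkReady=false, which in turn is because no CNI network configuration exists yet in /etc/kubernetes/cni/net.d/ (the network provider on this node has not written its config). A small sketch of the check the message implies; the directory path is from the log, while the .conf/.conflist/.json extension list mirrors common CNI config loaders and is an assumption here, not a CRI-O implementation detail:

# Sketch: is there any CNI network config for the runtime to load?
import json
from pathlib import Path

CNI_DIR = Path("/etc/kubernetes/cni/net.d")  # directory named in the log

configs = []
if CNI_DIR.is_dir():
    configs = sorted(p for p in CNI_DIR.iterdir()
                     if p.suffix in {".conf", ".conflist", ".json"})

if not configs:
    print(f"no CNI configuration file in {CNI_DIR} -> NetworkReady stays false")
for path in configs:
    with path.open() as f:
        doc = json.load(f)
    # a .conflist carries a "plugins" array; a plain .conf is a single plugin
    plugins = [p.get("type") for p in doc.get("plugins", [doc])]
    print(f"{path.name}: network={doc.get('name')!r} plugins={plugins}")

Until a file shows up in that directory the runtime keeps answering NetworkReady=false, and every pod that needs the pod network fails to sync, which is the "Error syncing pod, skipping" pattern surrounding these heartbeats.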
Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020736 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020840 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020869 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.020894 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124036 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124122 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.124140 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227802 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227859 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227875 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227897 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.227912 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331336 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331392 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.331415 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435358 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435442 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435474 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.435497 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539120 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539223 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.539243 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.602997 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 13:27:20.366250282 +0000 UTC Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642563 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642650 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642675 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.642724 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745763 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745906 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.745928 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
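Has your network provider started?"}

One certificate in this log is behaving normally: the kubernetes.io/kubelet-serving lines above report the same 2026-02-24 expiry each second but a different rotation deadline (2025-12-13, then 2025-12-01), because the certificate manager re-draws a randomized deadline in the tail of the certificate's lifetime on every pass. A sketch of that jitter, assuming a roughly 70-90% window and a hypothetical notBefore of 2025-02-24; only the expiry is taken from the log:

# Sketch of a jittered rotation deadline: pick a random point late in the
# certificate's lifetime. The 70-90% window and the notBefore below are
# assumptions; only the 2026-02-24 expiry comes from the log.
import random
from datetime import datetime

not_before = datetime(2025, 2, 24, 5, 53, 3)   # hypothetical issue time
not_after = datetime(2026, 2, 24, 5, 53, 3)    # expiry from the log
lifetime = not_after - not_before

for _ in range(3):
    deadline = not_before + lifetime * random.uniform(0.7, 0.9)
    print("rotation deadline:", deadline)
# each draw lands between Nov 2025 and Jan 2026, bracketing the shifting
# deadlines in the certificate_manager lines above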
Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853066 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853163 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.853181 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956178 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956235 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956268 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956289 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:57 crc kubenswrapper[4985]: I0128 18:14:57.956301 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:57Z","lastTransitionTime":"2026-01-28T18:14:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.063986 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064136 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064192 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.064213 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.168306 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.168421 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.168437 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.168462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.168478 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.263747 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.263891 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.263897 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264051 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264246 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264360 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.264389 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.264464 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.271949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.271983 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.271994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.272009 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.272022 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.375026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.375081 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.375096 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.375152 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.375168 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.477918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.477994 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.478010 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.478032 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.478047 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.580553 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.580624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.580636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.580678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.580691 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.603914 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 07:51:11.94536959 +0000 UTC Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.686601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.686668 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.686684 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.686706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.686721 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.730063 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.730271 4985 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 18:14:58 crc kubenswrapper[4985]: E0128 18:14:58.730369 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs podName:e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0 nodeName:}" failed. No retries permitted until 2026-01-28 18:16:02.730341117 +0000 UTC m=+173.556903948 (durationBeforeRetry 1m4s). 
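Has your network provider started?"}

Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs") pod "network-metrics-daemon-hrd6k" (UID: "e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0") : object "openshift-multus"/"metrics-daemon-secret" not registered

The mount failure above shows the secondary damage: the metrics-certs secret volume cannot be set up because the openshift-multus/metrics-daemon-secret object is "not registered" with the kubelet, and the retry is pushed out by an exponential backoff, here durationBeforeRetry 1m4s with no retries permitted until 18:16:02. A toy reproduction of a capped doubling backoff, to show where a 64 s wait plausibly comes from; the base delay, factor, and cap are illustrative assumptions, not the kubelet's exact constants:

# Toy capped doubling backoff, illustrative constants only.
from datetime import timedelta

def backoff_schedule(base=0.5, factor=2.0, cap=120.0, attempts=10):
    # yield (attempt, wait) pairs for a capped doubling backoff
    delay = base
    for attempt in range(1, attempts + 1):
        yield attempt, timedelta(seconds=delay)
        delay = min(delay * factor, cap)

for attempt, wait in backoff_schedule():
    print(f"attempt {attempt:2d}: wait {wait}")
# with these constants the 8th failure waits 0:01:04 (64 s), matching the
# "durationBeforeRetry 1m4s" logged above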
Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.789819 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.789878 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.789887 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.789907 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.789919 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.893489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.893579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.893604 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.893634 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.893656 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.996517 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.996556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.996568 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.996610 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:58 crc kubenswrapper[4985]: I0128 18:14:58.996622 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:58Z","lastTransitionTime":"2026-01-28T18:14:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.100238 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.100335 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.100350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.100377 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.100402 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.204134 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.204188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.204200 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.204219 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.204231 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.307578 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.307733 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.307761 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.307792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.307820 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.411316 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.411380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.411398 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.411422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.411440 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.515246 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.515370 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.515389 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.515419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.515439 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.604609 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 18:57:17.314837261 +0000 UTC Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.618393 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.618495 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.618520 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.618556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.618577 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.722143 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.722299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.722339 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.722380 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.722422 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.825121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.825188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.825204 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.825232 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.825290 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.928867 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.928927 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.928951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.928982 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:14:59 crc kubenswrapper[4985]: I0128 18:14:59.929118 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:14:59Z","lastTransitionTime":"2026-01-28T18:14:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.031564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.031619 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.031636 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.031656 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.031672 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.140884 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.140944 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.140961 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.140989 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.141114 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.255145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.255222 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.255303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.255333 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.255357 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263489 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263502 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.263489 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.263626 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.263700 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.263880 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:00 crc kubenswrapper[4985]: E0128 18:15:00.264202 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.357423 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.357500 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.357519 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.357549 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.357573 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.460493 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.460536 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.460547 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.460565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.460577 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.563791 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.563876 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.563888 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.563915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.563929 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.605040 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 22:19:32.878200761 +0000 UTC Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.666555 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.666612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.666624 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.666642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.666657 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.769557 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.769595 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.769603 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.769617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.769627 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.872792 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.872862 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.872885 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.872911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.872930 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.976139 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.976236 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.976303 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.976341 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:00 crc kubenswrapper[4985]: I0128 18:15:00.976365 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:00Z","lastTransitionTime":"2026-01-28T18:15:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.079515 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.079968 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.080063 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.080121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.080241 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.183554 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.183601 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.183617 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.183646 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.183668 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.282463 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://6dae99ce28b8be66a70ea002cf4b9047eada69fa2813f63cf1ac25d209326c77\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":
0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.286109 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.286188 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.286213 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.286245 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.286368 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.300630 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.321607 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"82fb0eec-adf5-4743-979d-6b7bf729e4f5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9a485196b85ef12555b3c5f2f34b401e959beb752088880d05f17ce84a978a41\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5698e4d6d3eef3340b6b0a918b16eddb5b799385d5ae6b50e2f189eff0e5bc73\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://42798f11ac92aa3e07e2379b0e873537ba5a833d3cdad404398dbe8d13ded540\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://07cc101744c41eefc14ed59132b80356180d200ccaec121f829d71932c5c91b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://31c3ef60e02ab2bf5f648686e68cc03529c32649585e5df09dd9383827b46eee\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:45Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://27008cea06f1ca0ee795d9761b2ad938105148c24c1316e881694c578a3e27eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:47Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1af697580cd62e6973a770573da8115bb4aa1098ad5628b56a2fc0fe19b92cf2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:49Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:48Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-qj2r9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-6j9qp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.336450 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ba791a5a-08bb-4a97-a4e4-9b0e06bac324\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:42Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e004520b3b40ac3881a4f8b78e34bc4235139f14f5804320be7697beea689aa5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsgxm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-rmr8h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.352750 4985 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-g2g4k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:30Z\\\",\\\"message\\\":\\\"2026-01-28T18:13:44+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08\\\\n2026-01-28T18:13:44+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c4b0bfa3-6cf0-4d1e-a9b9-9dc343160a08 to /host/opt/cni/bin/\\\\n2026-01-28T18:13:45Z [verbose] multus-daemon started\\\\n2026-01-28T18:13:45Z [verbose] Readiness Indicator file check\\\\n2026-01-28T18:14:30Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:40Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:14:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xhcbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g2g4k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.368593 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:54Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ql6nz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:54Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-hrd6k\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389848 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389891 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389901 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389918 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.389929 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.393669 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70cf33cd-1921-458e-ba4d-2a9dcd994c98\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:30Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f44ee5e056016d5b371787625e7ba1d6a759acacfdb13ca43af2937ca1c6cb7e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6d4534a99f621904c66f633c242dbe66d6522ee2668ee44985126b7e07ee4b9b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05fada25f77e583e986fc8ae47217e4ffc2191fb24fdbe1d7528c512ddce71c8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d4b15aae726dd7880c717d6d1dc56ace05f73be487cba796379028df3328c34e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f56e0261d9edab4a1ef4ec077f193b5436f4cd5ba027517edc70725a997158e6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ed729bef2da368e64f8143f3932058a83c8629ae5c061807242999839a2219d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://166b7e4b8535b4969b8cdce7fef6d6f296b5c8c214b149fc066c8e2842164d07\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f55ec0734c8f4e342d1cb2463243ffdcca1a9b089d4a82bbbec61a55c7fdf8d5\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.405581 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"be38081c-43d9-4241-aea1-a14fb312a0a4\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://83f697b1c16bcd1e36101e6b455b45641dbffe1cbf333e78f6a61de9228652f5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b67bc07dc45b6a6e977056c19d50bc4d8bee92234b25b1f67975101c4a295d85\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.424117 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"075a1687-5c8c-442a-b2d8-b2548b360416\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:17Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f1d420ee01c3ab02411251fca54e7beb176858d7ab79fd7065c4819c07612866\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://0746acb474dd14e00f0b4ba36f6565df0d9118e281f35cdb497aea4b0791cac3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aac490bfe761ad0443d1da72cdde4878a96666117a3013ec8db8e1d4f4c8b23b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.440195 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2e6948ca-6631-4bb7-9ec8-54f8429191e5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://915513458c185bd7aca82178dd7b61a8d33e1f61c996395007500402efab5871\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0506f9cd5876fd30cff8a826e3fdd622f81853c7720df0827ae474d7d30dfdd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c86916e7d4d8aab36b9903a675ee45939a638c31fc204b4ad39b1aeaf10a4945\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.458695 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:36Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://01be137e8a6d443ef0629ec12fa8d6c81fd870cdf25769fe1161586ea52bb832\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.475420 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493205 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493269 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493280 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493299 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493310 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.493597 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.507498 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:38Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce6d2771b99efc38c783da94cf6c9a62ae60ca46b474ba0ac1a0efa1ee6d1386\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.526861 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bd7b8cde-d2fe-4842-857e-545172f5bd12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d
32249c998da8ef2191a8ffdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T18:14:38Z\\\",\\\"message\\\":\\\"ormers/factory.go:160\\\\nI0128 18:14:38.290397 7118 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.290712 7118 reflector.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291004 7118 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 18:14:38.291093 7118 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291160 7118 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 18:14:38.291886 7118 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 18:14:38.291926 7118 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 18:14:38.291950 7118 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 18:14:38.291961 7118 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 18:14:38.291984 7118 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 18:14:38.291990 7118 factory.go:656] Stopping watch factory\\\\nI0128 18:14:38.292005 7118 handler.go:208] Removed *v1.Namespace ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:14:37Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ktbbd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-zd8w7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.538873 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-dlz95" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc08b2fa-f391-4427-b450-d72953c4056b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a38018887090f536b5e48de99ab4ad99be2c214893b40dc1687a283b2381129\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lrg9g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:43Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-dlz95\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.556476 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:14:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-
apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T18:13:34Z\\\",\\\"message\\\":\\\"pace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 18:13:20.217055 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 18:13:20.217807 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-1554065543/tls.crt::/tmp/serving-cert-1554065543/tls.key\\\\\\\"\\\\nI0128 18:13:33.867839 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0128 18:13:33.896620 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0128 18:13:33.896654 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0128 18:13:33.896685 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0128 18:13:33.896693 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0128 18:13:33.919891 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0128 18:13:33.920042 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920076 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0128 18:13:33.920108 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0128 18:13:33.920138 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0128 18:13:33.920167 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0128 18:13:33.920203 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0128 18:13:33.920532 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0128 18:13:33.923844 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:19Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:17Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T18:13:14Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T18:13:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:11Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.570228 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9xm27" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"1301b014-a9ed-4b29-8dc2-86c01d6bd13a\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b490bda99225d0d6b461560e2c41fff23c1399b0a82b980d04a3e8daeee12fc6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xz4mz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:40Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9xm27\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.584074 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"300be08e-8565-45ad-a77e-ac1b90ff61e7\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T18:13:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://5d223e85ba7451a1b77e58dcd6a7cecde36333ff08aa4c498acc3703fca0e605\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c4fbd8f1565f77c3e4da368f06371058c86b48262b9c414877a7bdaeb7c4394d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:13:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dfjql\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T18:13:53Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-xvwg5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T18:15:01Z is after 2025-08-24T17:21:41Z" Jan 28 
18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595543 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595607 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595638 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.595652 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.606043 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 05:18:32.893060577 +0000 UTC Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698073 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698147 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698176 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.698199 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.800988 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801049 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801068 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801092 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.801110 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.906422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907852 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907877 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907895 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:01 crc kubenswrapper[4985]: I0128 18:15:01.907907 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:01Z","lastTransitionTime":"2026-01-28T18:15:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010654 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010702 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010711 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010726 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.010736 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114034 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114083 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114095 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114115 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.114132 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216449 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216490 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216499 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216516 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.216529 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263674 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263720 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.263821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.263850 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.263978 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.264062 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:02 crc kubenswrapper[4985]: E0128 18:15:02.264145 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319678 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319735 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319754 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319781 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.319803 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422871 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422938 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422955 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422980 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.422997 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526181 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526457 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526470 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526486 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.526500 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.607018 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 13:35:55.479402302 +0000 UTC Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630175 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630186 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630203 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.630216 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733911 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733957 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733984 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.733995 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837191 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837214 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837278 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.837307 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940234 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940278 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940304 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:02 crc kubenswrapper[4985]: I0128 18:15:02.940325 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:02Z","lastTransitionTime":"2026-01-28T18:15:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043038 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043104 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043121 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043145 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.043160 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.145933 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146005 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146026 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146060 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.146081 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250162 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250216 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250231 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250270 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.250284 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353208 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353262 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353275 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.353284 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456179 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456296 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456323 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456355 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.456377 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559375 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559388 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559407 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.559421 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.607446 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 23:54:08.135925844 +0000 UTC Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663706 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663724 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663748 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.663766 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767004 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767072 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767091 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767116 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.767138 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870489 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870579 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870605 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.870663 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973241 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973326 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973343 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973365 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:03 crc kubenswrapper[4985]: I0128 18:15:03.973379 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:03Z","lastTransitionTime":"2026-01-28T18:15:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076247 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076349 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076366 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076391 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.076410 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180185 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180198 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180217 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.180233 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263467 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263592 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.263672 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263755 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.263807 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.263934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.264002 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
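Each "No sandbox for pod can be found" line above is immediately answered by "Error syncing pod, skipping": the kubelet will not create a sandbox for a pod that needs the cluster network while the runtime reports NetworkReady=false, so these four pods stay pending, while host-network pods are exempt from the gate (which is how the rest of the control plane keeps running). A rough Go sketch of that gate, using illustrative types rather than the kubelet's real API:

// Hedged sketch of the readiness gate behind "Error syncing pod,
// skipping". The pod type and canSync helper are inventions for
// illustration; only the behavior and messages mirror the log.
package main

import (
	"errors"
	"fmt"
)

type pod struct {
	name        string
	hostNetwork bool
}

func canSync(p pod, networkReady bool) error {
	// Pods that need a pod-network sandbox are held back until the
	// runtime reports NetworkReady=true; host-network pods proceed.
	if !networkReady && !p.hostNetwork {
		return errors.New("network is not ready: container runtime network not ready: NetworkReady=false")
	}
	return nil
}

func main() {
	pods := []pod{
		{"openshift-multus/network-metrics-daemon-hrd6k", false}, // from the log
		{"kube-apiserver (static)", true},                        // assumed host-network pod
	}
	for _, p := range pods {
		if err := canSync(p, false); err != nil {
			fmt.Printf("Error syncing pod, skipping: %v pod=%q\n", err, p.name)
			continue
		}
		fmt.Printf("syncing pod %q\n", p.name)
	}
}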
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:04 crc kubenswrapper[4985]: E0128 18:15:04.264187 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283014 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283098 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283133 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.283157 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387717 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387803 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387828 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387863 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.387886 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491496 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491589 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491612 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491642 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.491665 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594580 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594647 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594670 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594701 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.594723 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.608051 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:34:20.236524035 +0000 UTC Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699312 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699422 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.699442 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807643 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807720 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807764 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807805 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.807830 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911397 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911462 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911481 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911505 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:04 crc kubenswrapper[4985]: I0128 18:15:04.911524 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:04Z","lastTransitionTime":"2026-01-28T18:15:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013907 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013942 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013951 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013966 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.013976 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116226 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116307 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116321 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116344 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.116363 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219428 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219542 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219640 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.219667 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322364 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322419 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322433 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322453 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.322466 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425480 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425558 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425572 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425595 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.425611 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528596 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528664 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528676 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528696 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.528709 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.608707 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 06:23:11.110465825 +0000 UTC Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631746 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631814 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631831 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631857 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.631880 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735076 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735127 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735141 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735165 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.735177 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839195 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839325 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839350 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839379 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.839399 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942488 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942565 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942577 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942598 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:05 crc kubenswrapper[4985]: I0128 18:15:05.942616 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:05Z","lastTransitionTime":"2026-01-28T18:15:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046390 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046476 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046498 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046533 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.046562 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184490 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184564 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184587 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184615 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.184636 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263660 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263722 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263681 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.263792 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.263856 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.263879 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.264033 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:06 crc kubenswrapper[4985]: E0128 18:15:06.264298 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286916 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286949 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286958 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286972 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.286982 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390037 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390088 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390099 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390117 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.390130 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493031 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493103 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493123 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493148 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.493168 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574810 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574872 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574889 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574915 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.574933 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.608930 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 17:50:21.356361773 +0000 UTC Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.608994 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612503 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612556 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612590 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612609 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.612624 4985 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T18:15:06Z","lastTransitionTime":"2026-01-28T18:15:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.620658 4985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.652975 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc"] Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.653619 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.655623 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.656154 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.656633 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.657156 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c5ff91d-acf0-42d7-877b-c60b68cd5248-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725594 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5ff91d-acf0-42d7-877b-c60b68cd5248-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c5ff91d-acf0-42d7-877b-c60b68cd5248-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.725748 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.734949 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-6j9qp" podStartSLOduration=86.73491856 podStartE2EDuration="1m26.73491856s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
18:15:06.720132545 +0000 UTC m=+117.546695376" watchObservedRunningTime="2026-01-28 18:15:06.73491856 +0000 UTC m=+117.561481401" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.735201 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podStartSLOduration=86.735193948 podStartE2EDuration="1m26.735193948s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.734651382 +0000 UTC m=+117.561214243" watchObservedRunningTime="2026-01-28 18:15:06.735193948 +0000 UTC m=+117.561756789" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.794478 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g2g4k" podStartSLOduration=86.794447709 podStartE2EDuration="1m26.794447709s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.772813733 +0000 UTC m=+117.599376604" watchObservedRunningTime="2026-01-28 18:15:06.794447709 +0000 UTC m=+117.621010570" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.824823 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=31.82480241 podStartE2EDuration="31.82480241s" podCreationTimestamp="2026-01-28 18:14:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.823889944 +0000 UTC m=+117.650452765" watchObservedRunningTime="2026-01-28 18:15:06.82480241 +0000 UTC m=+117.651365231" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c5ff91d-acf0-42d7-877b-c60b68cd5248-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5ff91d-acf0-42d7-877b-c60b68cd5248-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826784 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c5ff91d-acf0-42d7-877b-c60b68cd5248-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826852 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.826896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.827050 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.827125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4c5ff91d-acf0-42d7-877b-c60b68cd5248-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.828196 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4c5ff91d-acf0-42d7-877b-c60b68cd5248-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.838002 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5ff91d-acf0-42d7-877b-c60b68cd5248-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.838318 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=18.838302667 podStartE2EDuration="18.838302667s" podCreationTimestamp="2026-01-28 18:14:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.837006129 +0000 UTC m=+117.663568950" watchObservedRunningTime="2026-01-28 18:15:06.838302667 +0000 UTC m=+117.664865488" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.846122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4c5ff91d-acf0-42d7-877b-c60b68cd5248-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9bxpc\" (UID: \"4c5ff91d-acf0-42d7-877b-c60b68cd5248\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.853716 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=90.853687689 podStartE2EDuration="1m30.853687689s" podCreationTimestamp="2026-01-28 18:13:36 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.852297488 +0000 UTC m=+117.678860329" watchObservedRunningTime="2026-01-28 18:15:06.853687689 +0000 UTC m=+117.680250530" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.881389 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=63.881362872 podStartE2EDuration="1m3.881362872s" podCreationTimestamp="2026-01-28 18:14:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.869071541 +0000 UTC m=+117.695634372" watchObservedRunningTime="2026-01-28 18:15:06.881362872 +0000 UTC m=+117.707925713" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.955641 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-dlz95" podStartSLOduration=86.955613893 podStartE2EDuration="1m26.955613893s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.955056137 +0000 UTC m=+117.781618958" watchObservedRunningTime="2026-01-28 18:15:06.955613893 +0000 UTC m=+117.782176714" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.969634 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=90.969609244 podStartE2EDuration="1m30.969609244s" podCreationTimestamp="2026-01-28 18:13:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.968926464 +0000 UTC m=+117.795489295" watchObservedRunningTime="2026-01-28 18:15:06.969609244 +0000 UTC m=+117.796172065" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.972429 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" Jan 28 18:15:06 crc kubenswrapper[4985]: I0128 18:15:06.983387 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9xm27" podStartSLOduration=86.983364729 podStartE2EDuration="1m26.983364729s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:06.983356638 +0000 UTC m=+117.809919489" watchObservedRunningTime="2026-01-28 18:15:06.983364729 +0000 UTC m=+117.809927550" Jan 28 18:15:07 crc kubenswrapper[4985]: I0128 18:15:07.001287 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-xvwg5" podStartSLOduration=87.001268714 podStartE2EDuration="1m27.001268714s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:07.001135431 +0000 UTC m=+117.827698272" watchObservedRunningTime="2026-01-28 18:15:07.001268714 +0000 UTC m=+117.827831545" Jan 28 18:15:07 crc kubenswrapper[4985]: I0128 18:15:07.057791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" event={"ID":"4c5ff91d-acf0-42d7-877b-c60b68cd5248","Type":"ContainerStarted","Data":"73b3b1bacd3a4d22a1b1bbf67172aeb8d6cfc0a5efe9e729c221693ea17bbadb"} Jan 28 18:15:07 crc kubenswrapper[4985]: I0128 18:15:07.264383 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:15:07 crc kubenswrapper[4985]: E0128 18:15:07.264660 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-zd8w7_openshift-ovn-kubernetes(bd7b8cde-d2fe-4842-857e-545172f5bd12)\"" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.062855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" event={"ID":"4c5ff91d-acf0-42d7-877b-c60b68cd5248","Type":"ContainerStarted","Data":"e828b99afd1d732b6cbe43ee2cfef2620b6af0c16cc64d0449320baebed48dcd"} Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263509 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263649 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.263697 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:08 crc kubenswrapper[4985]: I0128 18:15:08.263522 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.263810 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.264092 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:08 crc kubenswrapper[4985]: E0128 18:15:08.264172 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263669 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263755 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263803 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:10 crc kubenswrapper[4985]: I0128 18:15:10.263707 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.263914 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.264009 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.264129 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:10 crc kubenswrapper[4985]: E0128 18:15:10.264245 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:11 crc kubenswrapper[4985]: E0128 18:15:11.267524 4985 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 18:15:11 crc kubenswrapper[4985]: E0128 18:15:11.697057 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263493 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263536 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263601 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263618 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:12 crc kubenswrapper[4985]: I0128 18:15:12.263636 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263735 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263829 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:12 crc kubenswrapper[4985]: E0128 18:15:12.263906 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263794 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263842 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264100 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263913 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:14 crc kubenswrapper[4985]: I0128 18:15:14.263862 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264648 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264782 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:14 crc kubenswrapper[4985]: E0128 18:15:14.264944 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.263947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.264033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.264033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:16 crc kubenswrapper[4985]: I0128 18:15:16.264165 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264181 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264368 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264496 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.264676 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:16 crc kubenswrapper[4985]: E0128 18:15:16.698982 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.098935 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100080 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/0.log" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100159 4985 generic.go:334] "Generic (PLEG): container finished" podID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c" exitCode=1 Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerDied","Data":"72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c"} Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100288 4985 scope.go:117] "RemoveContainer" containerID="9fa6664683362f60f75a38eb830fad8eb0edce293d0e9b025cf4b3f09f630ebb" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.100957 4985 scope.go:117] "RemoveContainer" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c" Jan 28 18:15:17 crc kubenswrapper[4985]: E0128 18:15:17.101488 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-g2g4k_openshift-multus(14fdd73a-b8dd-42da-88b4-2ccb314c4f7a)\"" pod="openshift-multus/multus-g2g4k" podUID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" Jan 28 18:15:17 crc kubenswrapper[4985]: I0128 18:15:17.129886 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9bxpc" podStartSLOduration=97.129859821 podStartE2EDuration="1m37.129859821s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:08.081719196 +0000 UTC m=+118.908282047" watchObservedRunningTime="2026-01-28 18:15:17.129859821 +0000 UTC m=+127.956422682" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.106658 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263429 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263443 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263591 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:18 crc kubenswrapper[4985]: I0128 18:15:18.263554 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.263791 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.264065 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.264179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:18 crc kubenswrapper[4985]: E0128 18:15:18.264313 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263580 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263625 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263689 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.263789 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:20 crc kubenswrapper[4985]: I0128 18:15:20.263821 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.263966 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.264067 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:20 crc kubenswrapper[4985]: E0128 18:15:20.264139 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:21 crc kubenswrapper[4985]: E0128 18:15:21.699936 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.263423 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.263666 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.264439 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.263415 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.264612 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.264814 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:22 crc kubenswrapper[4985]: I0128 18:15:22.264901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.264973 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:22 crc kubenswrapper[4985]: E0128 18:15:22.265179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.037107 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hrd6k"] Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.132033 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log" Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.135693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerStarted","Data":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"} Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.135732 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:23 crc kubenswrapper[4985]: E0128 18:15:23.136016 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:23 crc kubenswrapper[4985]: I0128 18:15:23.171307 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podStartSLOduration=103.171282099 podStartE2EDuration="1m43.171282099s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:23.171174866 +0000 UTC m=+133.997737727" watchObservedRunningTime="2026-01-28 18:15:23.171282099 +0000 UTC m=+133.997844930" Jan 28 18:15:24 crc kubenswrapper[4985]: I0128 18:15:24.263420 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:24 crc kubenswrapper[4985]: I0128 18:15:24.263451 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:24 crc kubenswrapper[4985]: I0128 18:15:24.263521 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:24 crc kubenswrapper[4985]: E0128 18:15:24.263669 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:24 crc kubenswrapper[4985]: E0128 18:15:24.263766 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:24 crc kubenswrapper[4985]: E0128 18:15:24.263846 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:25 crc kubenswrapper[4985]: I0128 18:15:25.263826 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:25 crc kubenswrapper[4985]: E0128 18:15:25.264078 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:26 crc kubenswrapper[4985]: I0128 18:15:26.263672 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:26 crc kubenswrapper[4985]: I0128 18:15:26.263706 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:26 crc kubenswrapper[4985]: I0128 18:15:26.263706 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.263835 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.263933 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.263979 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:26 crc kubenswrapper[4985]: E0128 18:15:26.702027 4985 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:15:27 crc kubenswrapper[4985]: I0128 18:15:27.263834 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:27 crc kubenswrapper[4985]: E0128 18:15:27.264157 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.263867 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.263902 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:28 crc kubenswrapper[4985]: E0128 18:15:28.264211 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.264305 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:28 crc kubenswrapper[4985]: E0128 18:15:28.264420 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:28 crc kubenswrapper[4985]: E0128 18:15:28.264620 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:28 crc kubenswrapper[4985]: I0128 18:15:28.265365 4985 scope.go:117] "RemoveContainer" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c" Jan 28 18:15:29 crc kubenswrapper[4985]: I0128 18:15:29.159661 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log" Jan 28 18:15:29 crc kubenswrapper[4985]: I0128 18:15:29.160101 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535"} Jan 28 18:15:29 crc kubenswrapper[4985]: I0128 18:15:29.263367 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:29 crc kubenswrapper[4985]: E0128 18:15:29.263551 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:30 crc kubenswrapper[4985]: I0128 18:15:30.263910 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:30 crc kubenswrapper[4985]: I0128 18:15:30.263978 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:30 crc kubenswrapper[4985]: I0128 18:15:30.264106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:30 crc kubenswrapper[4985]: E0128 18:15:30.264204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 18:15:30 crc kubenswrapper[4985]: E0128 18:15:30.264365 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 18:15:30 crc kubenswrapper[4985]: E0128 18:15:30.264608 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 18:15:31 crc kubenswrapper[4985]: I0128 18:15:31.263122 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:31 crc kubenswrapper[4985]: E0128 18:15:31.265155 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-hrd6k" podUID="e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.263488 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.263544 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.263580 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.266583 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.266769 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.268355 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 18:15:32 crc kubenswrapper[4985]: I0128 18:15:32.268398 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 18:15:33 crc kubenswrapper[4985]: I0128 18:15:33.263635 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:15:33 crc kubenswrapper[4985]: I0128 18:15:33.267164 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 18:15:33 crc kubenswrapper[4985]: I0128 18:15:33.267241 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.383629 4985 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.440370 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.441334 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.443505 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.444357 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.447483 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.447855 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.448076 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.448363 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.450000 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.450155 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.451125 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.452386 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.453201 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.454211 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-hpz9q"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.455266 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.456594 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.456724 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.456942 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.457270 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.457891 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.457960 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458125 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458202 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458401 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458446 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458666 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.458881 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.459101 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.463429 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hjjf7"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.464306 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.465153 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.465681 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.466305 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.466800 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.466980 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.467077 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.467821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.468550 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.468981 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.469930 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.472030 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.472580 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.473167 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.475791 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.476485 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477620 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477870 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477956 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2wxf2"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.477962 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478001 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478064 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478156 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478179 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.478736 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.480826 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.481743 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.482739 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483149 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483236 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483396 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483449 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483576 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483608 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483417 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.483750 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.484192 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.485755 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486211 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486612 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.486842 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.488186 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.492210 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.494225 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.500446 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.504029 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.529593 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.531387 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.531675 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.532660 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bmvks"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533145 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533442 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533593 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533800 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.533995 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hk2lj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.534737 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.534905 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.534943 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.535121 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.535662 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.535771 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.536568 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.537885 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538180 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538219 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538336 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538416 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538465 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538351 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538635 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538694 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538393 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538850 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538648 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539030 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539214 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539449 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.538583 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540101 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538806 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.538982 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539808 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540292 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540005 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.539879 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540033 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-node-pullsecrets\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540639 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-serving-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540672 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/715ad1e8-6659-4a18-a007-ad31ffa7044e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540729 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8b060f-1416-4676-af77-45c0b411ff59-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540871 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540911 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q29sg\" (UniqueName: \"kubernetes.io/projected/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-kube-api-access-q29sg\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540826 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541036 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541086 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541119 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715ad1e8-6659-4a18-a007-ad31ffa7044e-serving-cert\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" 
Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541181 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541422 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-dir\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541716 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-config\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.541993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5691988c-c881-437e-aa60-317e424b3170-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542099 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542177 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8b060f-1416-4676-af77-45c0b411ff59-config\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542298 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010ced82-1614-4ade-958b-d12ea6cda1b9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rrz\" (UniqueName: \"kubernetes.io/projected/715ad1e8-6659-4a18-a007-ad31ffa7044e-kube-api-access-77rrz\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-encryption-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543097 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543352 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543550 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" 
(UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-client\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-serving-cert\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542144 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.542310 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.543882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njzzn\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-kube-api-access-njzzn\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544021 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-image-import-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544111 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544180 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544526 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: 
\"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544665 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.544754 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-client\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545041 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-auth-proxy-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545126 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be08d23e-d6c9-4b42-904b-c36b05dfc316-serving-cert\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545194 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010ced82-1614-4ade-958b-d12ea6cda1b9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545331 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-config\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545393 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545457 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545428 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545569 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545608 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-images\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545681 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545718 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545612 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.545689 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545785 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545788 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mzzm\" (UniqueName: \"kubernetes.io/projected/be08d23e-d6c9-4b42-904b-c36b05dfc316-kube-api-access-7mzzm\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545814 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545837 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5bb6\" (UniqueName: \"kubernetes.io/projected/c731b198-314f-46a9-ad13-a4cc6c7bab94-kube-api-access-f5bb6\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545721 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.545950 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-policies\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546034 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5691988c-c881-437e-aa60-317e424b3170-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546078 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546118 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxxvl\" (UniqueName: \"kubernetes.io/projected/010ced82-1614-4ade-958b-d12ea6cda1b9-kube-api-access-vxxvl\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546168 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-service-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546293 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit-dir\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546333 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546369 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546431 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6ffjk\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-kube-api-access-6ffjk\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546493 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26d6j\" (UniqueName: \"kubernetes.io/projected/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-kube-api-access-26d6j\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546552 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nw2\" (UniqueName: \"kubernetes.io/projected/218b57d8-c3a3-4a33-a3ef-6701cf557911-kube-api-access-h5nw2\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546583 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-machine-approver-tls\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.546640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546670 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218b57d8-c3a3-4a33-a3ef-6701cf557911-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546699 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546728 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-encryption-config\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546790 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8b060f-1416-4676-af77-45c0b411ff59-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kjx6\" (UniqueName: \"kubernetes.io/projected/ebf5f82e-2a14-49d9-b670-59ed73e71203-kube-api-access-4kjx6\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546864 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rpw6\" (UniqueName: \"kubernetes.io/projected/25061ce4-ca31-4da7-ad36-c6535e1d2028-kube-api-access-8rpw6\") pod \"downloads-7954f5f757-hpz9q\" (UID: \"25061ce4-ca31-4da7-ad36-c6535e1d2028\") " pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546899 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-serving-cert\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546942 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.546975 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547343 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547543 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547695 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.547957 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.548113 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.548304 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.549515 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.551593 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.551888 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.555151 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.540928 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.557148 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.557321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.557767 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" 
Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.561133 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.561353 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.563421 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.564209 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.564846 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.565711 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.580024 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.594115 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6799"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.594422 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.607789 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.608753 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.608865 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.610340 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hpz9q"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.611093 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.615017 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.617444 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.618497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.619688 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.621874 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.622777 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.624523 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.625119 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.625928 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.626871 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.628581 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.629396 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.631467 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hk2lj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.632433 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.633613 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.634512 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.639285 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.640374 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.642318 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.643067 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.644890 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.645085 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.645884 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.646649 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647891 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-config\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.647984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648027 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648047 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-config\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-images\") pod 
\"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648083 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648101 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mzzm\" (UniqueName: \"kubernetes.io/projected/be08d23e-d6c9-4b42-904b-c36b05dfc316-kube-api-access-7mzzm\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648138 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648154 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5bb6\" (UniqueName: \"kubernetes.io/projected/c731b198-314f-46a9-ad13-a4cc6c7bab94-kube-api-access-f5bb6\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648170 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648186 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648222 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648241 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-policies\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.648863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.649675 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650236 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650415 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5691988c-c881-437e-aa60-317e424b3170-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650579 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650642 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4skx\" (UniqueName: \"kubernetes.io/projected/9675b92d-1a0c-460b-bbad-cd6abab61f2f-kube-api-access-v4skx\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650867 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650983 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxxvl\" (UniqueName: \"kubernetes.io/projected/010ced82-1614-4ade-958b-d12ea6cda1b9-kube-api-access-vxxvl\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.650805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651069 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: 
\"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651055 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651108 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-service-ca-bundle\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651131 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-config\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651169 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit-dir\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651193 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651219 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651240 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651290 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-service-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651317 4985 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-6ffjk\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-kube-api-access-6ffjk\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651336 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651355 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5nw2\" (UniqueName: \"kubernetes.io/projected/218b57d8-c3a3-4a33-a3ef-6701cf557911-kube-api-access-h5nw2\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651377 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-machine-approver-tls\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651383 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit-dir\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651394 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653124 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26d6j\" (UniqueName: \"kubernetes.io/projected/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-kube-api-access-26d6j\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be08d23e-d6c9-4b42-904b-c36b05dfc316-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653152 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-trusted-ca\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651874 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653178 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/218b57d8-c3a3-4a33-a3ef-6701cf557911-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653199 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-encryption-config\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8b060f-1416-4676-af77-45c0b411ff59-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653271 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9675b92d-1a0c-460b-bbad-cd6abab61f2f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653296 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kjx6\" (UniqueName: \"kubernetes.io/projected/ebf5f82e-2a14-49d9-b670-59ed73e71203-kube-api-access-4kjx6\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: 
\"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-policies\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653012 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.653459 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.654104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.654353 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.651162 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-images\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655084 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.652985 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-fn9d5"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655458 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc 
kubenswrapper[4985]: I0128 18:15:37.655785 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rpw6\" (UniqueName: \"kubernetes.io/projected/25061ce4-ca31-4da7-ad36-c6535e1d2028-kube-api-access-8rpw6\") pod \"downloads-7954f5f757-hpz9q\" (UID: \"25061ce4-ca31-4da7-ad36-c6535e1d2028\") " pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.655903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-serving-cert\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656018 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656294 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-node-pullsecrets\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656413 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-serving-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656517 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/715ad1e8-6659-4a18-a007-ad31ffa7044e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656634 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8b060f-1416-4676-af77-45c0b411ff59-serving-cert\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.656952 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dn6fx\" (UniqueName: \"kubernetes.io/projected/db632812-bc0d-41f2-9c01-a19d40eb69be-kube-api-access-dn6fx\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.657133 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.657239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658294 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q29sg\" (UniqueName: \"kubernetes.io/projected/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-kube-api-access-q29sg\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662516 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662543 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " 
pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.659179 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-dir\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662627 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/ebf5f82e-2a14-49d9-b670-59ed73e71203-node-pullsecrets\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662398 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-encryption-config\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660415 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-serving-cert\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662709 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715ad1e8-6659-4a18-a007-ad31ffa7044e-serving-cert\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/218b57d8-c3a3-4a33-a3ef-6701cf557911-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662773 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662811 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzrqc\" (UniqueName: \"kubernetes.io/projected/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-kube-api-access-fzrqc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.659564 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/715ad1e8-6659-4a18-a007-ad31ffa7044e-available-featuregates\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-config\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.662960 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.660383 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.663054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c731b198-314f-46a9-ad13-a4cc6c7bab94-audit-dir\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.663931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/218b57d8-c3a3-4a33-a3ef-6701cf557911-config\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.664424 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-qnrsp"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.665333 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.665541 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c731b198-314f-46a9-ad13-a4cc6c7bab94-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.658230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.665927 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmnqc\" (UniqueName: \"kubernetes.io/projected/bf0cd343-6643-4463-bb9b-6e291a601901-kube-api-access-mmnqc\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666706 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8b060f-1416-4676-af77-45c0b411ff59-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5691988c-c881-437e-aa60-317e424b3170-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-config\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666893 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wl24\" (UniqueName: \"kubernetes.io/projected/a1f443aa-50c0-4865-b6a3-a07d13b71e73-kube-api-access-9wl24\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.666960 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667050 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-encryption-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010ced82-1614-4ade-958b-d12ea6cda1b9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667109 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77rrz\" (UniqueName: \"kubernetes.io/projected/715ad1e8-6659-4a18-a007-ad31ffa7044e-kube-api-access-77rrz\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667293 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667401 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-client\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667426 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-config\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-serving-cert\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0a8b060f-1416-4676-af77-45c0b411ff59-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.667467 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-client\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668460 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-serving-cert\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668504 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9675b92d-1a0c-460b-bbad-cd6abab61f2f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668541 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njzzn\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-kube-api-access-njzzn\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf0cd343-6643-4463-bb9b-6e291a601901-metrics-tls\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668637 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-image-import-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: 
\"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.668660 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.669405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-trusted-ca-bundle\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.669452 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-audit\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670386 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/715ad1e8-6659-4a18-a007-ad31ffa7044e-serving-cert\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670771 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.670874 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5691988c-c881-437e-aa60-317e424b3170-trusted-ca\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.671104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.671163 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.671695 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672120 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672574 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db632812-bc0d-41f2-9c01-a19d40eb69be-serving-cert\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672739 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672768 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672794 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672825 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjg8s\" (UniqueName: \"kubernetes.io/projected/d3e3ff22-4547-453f-bd6a-bf8d4098f3a3-kube-api-access-jjg8s\") pod \"migrator-59844c95c7-k5vgf\" (UID: \"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-auth-proxy-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: 
\"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.672985 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be08d23e-d6c9-4b42-904b-c36b05dfc316-serving-cert\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673012 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-client\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673059 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010ced82-1614-4ade-958b-d12ea6cda1b9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673493 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.673775 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.674654 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/010ced82-1614-4ade-958b-d12ea6cda1b9-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.677982 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzzsl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.678568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679120 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679552 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679988 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.680174 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.680579 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-serving-cert\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.679198 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.680725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-serving-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.681271 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-image-import-ca\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.681807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-machine-approver-tls\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.682590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.683568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.685064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ebf5f82e-2a14-49d9-b670-59ed73e71203-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.688959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-encryption-config\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689420 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-auth-proxy-config\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689573 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/c731b198-314f-46a9-ad13-a4cc6c7bab94-etcd-client\") pod 
\"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/010ced82-1614-4ade-958b-d12ea6cda1b9-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.689886 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5691988c-c881-437e-aa60-317e424b3170-metrics-tls\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.690005 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.690370 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.690715 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ebf5f82e-2a14-49d9-b670-59ed73e71203-etcd-client\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.691140 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.691364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.692229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0a8b060f-1416-4676-af77-45c0b411ff59-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: 
\"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.692521 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.694423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.694887 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.698514 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.700902 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.701527 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hjjf7"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.703911 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.713182 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.713425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be08d23e-d6c9-4b42-904b-c36b05dfc316-serving-cert\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.716219 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bmvks"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.719812 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.719854 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.720938 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.722513 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.724040 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6799"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.724177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.724659 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.725965 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.727228 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2wxf2"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.729031 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.730470 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.731663 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.733099 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.734349 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.735689 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-g5knd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.737076 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.737123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.738412 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.739704 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.741039 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.742286 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzzsl"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.743586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.744170 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.744858 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fn9d5"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.746016 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5knd"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.747600 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-2lzzr"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.748132 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.749390 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5zj27"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.750382 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.750914 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5zj27"] Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.764844 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.774910 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dn6fx\" (UniqueName: \"kubernetes.io/projected/db632812-bc0d-41f2-9c01-a19d40eb69be-kube-api-access-dn6fx\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.774972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.774994 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzrqc\" (UniqueName: \"kubernetes.io/projected/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-kube-api-access-fzrqc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmnqc\" (UniqueName: \"kubernetes.io/projected/bf0cd343-6643-4463-bb9b-6e291a601901-kube-api-access-mmnqc\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775056 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-config\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wl24\" (UniqueName: \"kubernetes.io/projected/a1f443aa-50c0-4865-b6a3-a07d13b71e73-kube-api-access-9wl24\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775116 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 
18:15:37.775141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-config\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-serving-cert\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775174 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-client\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9675b92d-1a0c-460b-bbad-cd6abab61f2f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf0cd343-6643-4463-bb9b-6e291a601901-metrics-tls\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db632812-bc0d-41f2-9c01-a19d40eb69be-serving-cert\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775262 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775281 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjg8s\" (UniqueName: \"kubernetes.io/projected/d3e3ff22-4547-453f-bd6a-bf8d4098f3a3-kube-api-access-jjg8s\") pod \"migrator-59844c95c7-k5vgf\" (UID: \"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775306 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-kube-api-access\") pod 
\"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-config\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775371 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4skx\" (UniqueName: \"kubernetes.io/projected/9675b92d-1a0c-460b-bbad-cd6abab61f2f-kube-api-access-v4skx\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775388 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-service-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-trusted-ca\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.775479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9675b92d-1a0c-460b-bbad-cd6abab61f2f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.776195 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9675b92d-1a0c-460b-bbad-cd6abab61f2f-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.778901 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9675b92d-1a0c-460b-bbad-cd6abab61f2f-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.779293 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf0cd343-6643-4463-bb9b-6e291a601901-metrics-tls\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.784519 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.804739 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.824140 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.828900 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-serving-cert\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.844805 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.848861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-client\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.864169 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.884948 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.885856 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-config\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.904856 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.906701 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.924217 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.926583 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/a1f443aa-50c0-4865-b6a3-a07d13b71e73-etcd-service-ca\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.964317 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:15:37 crc kubenswrapper[4985]: I0128 18:15:37.984563 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.004140 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.009519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.023845 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.026270 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-config\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.044355 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.064915 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.085036 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.105184 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.124395 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.127504 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-config\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.144409 4985 reflector.go:368] Caches populated for *v1.Secret 
from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.165211 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.173044 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/db632812-bc0d-41f2-9c01-a19d40eb69be-serving-cert\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.205928 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.207056 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.219099 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/db632812-bc0d-41f2-9c01-a19d40eb69be-trusted-ca\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.225189 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.244060 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.265029 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.284100 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.304672 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.324862 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.344644 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.347300 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.364104 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 18:15:38 crc 
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.391325 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.425395 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.445138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.464757 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.484546 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.505770 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.524881 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.544528 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.564612 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.585722 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.605219 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.624997 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.662452 4985 request.go:700] Waited for 1.013031228s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/serviceaccounts/ingress-operator/token
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.673196 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5bb6\" (UniqueName: \"kubernetes.io/projected/c731b198-314f-46a9-ad13-a4cc6c7bab94-kube-api-access-f5bb6\") pod \"apiserver-7bbb656c7d-v2hv6\" (UID: \"c731b198-314f-46a9-ad13-a4cc6c7bab94\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.694826 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-bound-sa-token\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.716501 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"route-controller-manager-6576b87f9c-xqdzz\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.734480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mzzm\" (UniqueName: \"kubernetes.io/projected/be08d23e-d6c9-4b42-904b-c36b05dfc316-kube-api-access-7mzzm\") pod \"authentication-operator-69f744f599-pcb4d\" (UID: \"be08d23e-d6c9-4b42-904b-c36b05dfc316\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.745499 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.749047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.765201 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.795160 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.804597 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.823739 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.844048 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.848567 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.895014 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxxvl\" (UniqueName: \"kubernetes.io/projected/010ced82-1614-4ade-958b-d12ea6cda1b9-kube-api-access-vxxvl\") pod \"openshift-controller-manager-operator-756b6f6bc6-b8tzt\" (UID: \"010ced82-1614-4ade-958b-d12ea6cda1b9\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.914044 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"oauth-openshift-558db77b4-fdfqq\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.924189 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.934051 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6ffjk\" (UniqueName: \"kubernetes.io/projected/50627d4d-8f08-4db3-a8a4-e8b0b94b1b71-kube-api-access-6ffjk\") pod \"cluster-image-registry-operator-dc59b4c8b-4tdfc\" (UID: \"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.953297 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.954310 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5nw2\" (UniqueName: \"kubernetes.io/projected/218b57d8-c3a3-4a33-a3ef-6701cf557911-kube-api-access-h5nw2\") pod \"machine-api-operator-5694c8668f-hjjf7\" (UID: \"218b57d8-c3a3-4a33-a3ef-6701cf557911\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.962669 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.965304 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.974654 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.982237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"console-f9d7485db-b5t5k\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:38 crc kubenswrapper[4985]: I0128 18:15:38.985020 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.005903 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.040804 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0a8b060f-1416-4676-af77-45c0b411ff59-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-7gnfx\" (UID: \"0a8b060f-1416-4676-af77-45c0b411ff59\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.046703 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.064928 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.103039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26d6j\" (UniqueName: \"kubernetes.io/projected/fa6948a7-6763-4c03-b6f9-ecfb38a8a064-kube-api-access-26d6j\") pod \"cluster-samples-operator-665b6dd947-77hkl\" (UID: \"fa6948a7-6763-4c03-b6f9-ecfb38a8a064\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.124843 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.125923 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kjx6\" (UniqueName: \"kubernetes.io/projected/ebf5f82e-2a14-49d9-b670-59ed73e71203-kube-api-access-4kjx6\") pod \"apiserver-76f77b778f-2wxf2\" (UID: \"ebf5f82e-2a14-49d9-b670-59ed73e71203\") " pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.143113 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.146034 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rpw6\" (UniqueName: \"kubernetes.io/projected/25061ce4-ca31-4da7-ad36-c6535e1d2028-kube-api-access-8rpw6\") pod \"downloads-7954f5f757-hpz9q\" (UID: \"25061ce4-ca31-4da7-ad36-c6535e1d2028\") " pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.162411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pcb4d"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.168766 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q29sg\" (UniqueName: \"kubernetes.io/projected/a3b95c03-1b0d-4c06-bb85-2f9ed127737b-kube-api-access-q29sg\") pod \"machine-approver-56656f9798-6qh9r\" (UID: \"a3b95c03-1b0d-4c06-bb85-2f9ed127737b\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.179801 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"controller-manager-879f6c89f-52cvd\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.185624 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.186052 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe08d23e_d6c9_4b42_904b_c36b05dfc316.slice/crio-6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1 WatchSource:0}: Error finding container 6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1: Status 404 returned error can't find the container with id 6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.199984 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.203893 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.207663 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerStarted","Data":"6b7ea25547e4a1f736567d0db68e73078c5436079bd724f4596ad496d44816d1"} Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.215144 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd061f6d6_1983_405d_93af_3e492ff49f7c.slice/crio-92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78 WatchSource:0}: Error finding container 92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78: Status 404 returned error can't find the container with id 92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78 Jan 28 18:15:39 crc kubenswrapper[4985]: 
I0128 18:15:39.216149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.225540 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.234417 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.242761 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.264245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77rrz\" (UniqueName: \"kubernetes.io/projected/715ad1e8-6659-4a18-a007-ad31ffa7044e-kube-api-access-77rrz\") pod \"openshift-config-operator-7777fb866f-gm5gt\" (UID: \"715ad1e8-6659-4a18-a007-ad31ffa7044e\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.279738 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.314287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.317062 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.317130 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.320060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-njzzn\" (UniqueName: \"kubernetes.io/projected/5691988c-c881-437e-aa60-317e424b3170-kube-api-access-njzzn\") pod \"ingress-operator-5b745b69d9-8fcwv\" (UID: \"5691988c-c881-437e-aa60-317e424b3170\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.323705 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.347592 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.356565 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.363758 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.366973 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.383861 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.405387 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.416460 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.424787 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.440175 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.449755 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.460046 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hjjf7"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.465541 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.483992 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.488917 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.494409 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod218b57d8_c3a3_4a33_a3ef_6701cf557911.slice/crio-9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe WatchSource:0}: Error finding container 9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe: Status 404 returned error can't find the container with id 9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.499121 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.509846 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.524151 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.525306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.534088 4985 manager.go:1169] Failed to process watch event 
{EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc731b198_314f_46a9_ad13_a4cc6c7bab94.slice/crio-7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587 WatchSource:0}: Error finding container 7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587: Status 404 returned error can't find the container with id 7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.544886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.545270 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.563166 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-2wxf2"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.564058 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.584409 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.604609 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.610095 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.622657 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf5f82e_2a14_49d9_b670_59ed73e71203.slice/crio-91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2 WatchSource:0}: Error finding container 91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2: Status 404 returned error can't find the container with id 91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2 Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.622934 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod81ef78af_dc11_4231_9693_eb088718d103.slice/crio-6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2 WatchSource:0}: Error finding container 6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2: Status 404 returned error can't find the container with id 6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2 Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.623930 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.630746 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc"] Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.643624 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.662535 4985 request.go:700] Waited for 1.925026435s due to client-side throttling, not priority and fairness, request: 
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.664287 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.686234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.706936 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.719564 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl"]
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.723726 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.746575 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.752662 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-hpz9q"]
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.764536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.785533 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.804587 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 18:15:39 crc kubenswrapper[4985]: W0128 18:15:39.830615 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25061ce4_ca31_4da7_ad36_c6535e1d2028.slice/crio-d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6 WatchSource:0}: Error finding container d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6: Status 404 returned error can't find the container with id d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.831418 4985 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.831595 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv"]
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.843235 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.862192 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx"]
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.881103 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dn6fx\" (UniqueName: \"kubernetes.io/projected/db632812-bc0d-41f2-9c01-a19d40eb69be-kube-api-access-dn6fx\") pod \"console-operator-58897d9998-j6799\" (UID: \"db632812-bc0d-41f2-9c01-a19d40eb69be\") " pod="openshift-console-operator/console-operator-58897d9998-j6799"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.918008 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fzrqc\" (UniqueName: \"kubernetes.io/projected/f0e8632e-effa-4fe6-ac4d-8c33abe6eecc-kube-api-access-fzrqc\") pod \"kube-storage-version-migrator-operator-b67b599dd-k96zr\" (UID: \"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.923331 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-j6799"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.925359 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmnqc\" (UniqueName: \"kubernetes.io/projected/bf0cd343-6643-4463-bb9b-6e291a601901-kube-api-access-mmnqc\") pod \"dns-operator-744455d44c-bmvks\" (UID: \"bf0cd343-6643-4463-bb9b-6e291a601901\") " pod="openshift-dns-operator/dns-operator-744455d44c-bmvks"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.938535 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.939456 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt"]
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.944876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wl24\" (UniqueName: \"kubernetes.io/projected/a1f443aa-50c0-4865-b6a3-a07d13b71e73-kube-api-access-9wl24\") pod \"etcd-operator-b45778765-hk2lj\" (UID: \"a1f443aa-50c0-4865-b6a3-a07d13b71e73\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.966142 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4skx\" (UniqueName: \"kubernetes.io/projected/9675b92d-1a0c-460b-bbad-cd6abab61f2f-kube-api-access-v4skx\") pod \"openshift-apiserver-operator-796bbdcf4f-vgvlm\" (UID: \"9675b92d-1a0c-460b-bbad-cd6abab61f2f\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.979895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjg8s\" (UniqueName: \"kubernetes.io/projected/d3e3ff22-4547-453f-bd6a-bf8d4098f3a3-kube-api-access-jjg8s\") pod \"migrator-59844c95c7-k5vgf\" (UID: \"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"
Jan 28 18:15:39 crc kubenswrapper[4985]: I0128 18:15:39.999218 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c08b13aa-cae7-420a-ae3b-4846ea74c5c8-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-z9cdk\" (UID: \"c08b13aa-cae7-420a-ae3b-4846ea74c5c8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.032716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.032780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.032870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033008 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033058 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033088 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033111 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d9a024-6342-42ba-8a0b-4db3aa777a82-config\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07d9a024-6342-42ba-8a0b-4db3aa777a82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033173 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07d9a024-6342-42ba-8a0b-4db3aa777a82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.033237 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.033679 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.533662017 +0000 UTC m=+151.360224838 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.133937 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnnwc\" (UniqueName: \"kubernetes.io/projected/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-kube-api-access-hnnwc\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134099 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhf2x\" (UniqueName: \"kubernetes.io/projected/cb7bad3c-725d-4a80-b398-140c6acf3825-kube-api-access-rhf2x\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134144 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-srv-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
(UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-srv-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134189 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365a9e45-74e9-4231-8ccf-c5fbf200ab83-config-volume\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134211 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45774b89-be22-4692-a944-e5f12f898ea6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134244 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-certs\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134339 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134439 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134498 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqc7\" (UniqueName: \"kubernetes.io/projected/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-kube-api-access-vqqc7\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134521 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-plugins-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: 
\"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134598 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d9a024-6342-42ba-8a0b-4db3aa777a82-config\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb8kf\" (UniqueName: \"kubernetes.io/projected/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-kube-api-access-zb8kf\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-csi-data-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134743 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-metrics-certs\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134804 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134861 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07d9a024-6342-42ba-8a0b-4db3aa777a82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.134910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-images\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136451 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-key\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136625 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136692 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnbvm\" (UniqueName: \"kubernetes.io/projected/70124ff4-00b0-41ef-947d-55eda7af02db-kube-api-access-qnbvm\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-stats-auth\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.136957 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-default-certificate\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137063 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjmv7\" (UniqueName: \"kubernetes.io/projected/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-kube-api-access-gjmv7\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07d9a024-6342-42ba-8a0b-4db3aa777a82-config\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: 
I0128 18:15:40.137440 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/0953ef82-fce5-4008-85c8-b1377a8f66a2-kube-api-access-d69sn\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137487 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-socket-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7bad3c-725d-4a80-b398-140c6acf3825-service-ca-bundle\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c194d09-8a64-45a1-b40b-d1ea249b2626-proxy-tls\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.137690 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l87gp\" (UniqueName: \"kubernetes.io/projected/45774b89-be22-4692-a944-e5f12f898ea6-kube-api-access-l87gp\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.137784 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.637756505 +0000 UTC m=+151.464319356 (durationBeforeRetry 500ms). 
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.138045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-mountpoint-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.138285 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70124ff4-00b0-41ef-947d-55eda7af02db-tmpfs\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.138354 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.139784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwr85\" (UniqueName: \"kubernetes.io/projected/97299e5b-e1d8-41b0-b1b2-c5658f42a436-kube-api-access-rwr85\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qw4sq\" (UniqueName: \"kubernetes.io/projected/cae1c988-06ab-4748-a62d-5bd7301b2c8d-kube-api-access-qw4sq\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97299e5b-e1d8-41b0-b1b2-c5658f42a436-cert\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140755 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953ef82-fce5-4008-85c8-b1377a8f66a2-serving-cert\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.140992 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-registration-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141277 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-profile-collector-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141568 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmx8w\" (UniqueName: \"kubernetes.io/projected/fa42b50c-59ed-4523-a6a0-994a72ff7071-kube-api-access-nmx8w\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.141836 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.145708 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.145894 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
\"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.146305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.146653 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/365a9e45-74e9-4231-8ccf-c5fbf200ab83-metrics-tls\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.147490 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-srv-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.147762 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7ecd4c5-97bd-4190-b474-a745b00d58aa-proxy-tls\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.147862 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.647824141 +0000 UTC m=+151.474387012 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148198 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xvzz\" (UniqueName: \"kubernetes.io/projected/365a9e45-74e9-4231-8ccf-c5fbf200ab83-kube-api-access-9xvzz\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148411 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148508 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07d9a024-6342-42ba-8a0b-4db3aa777a82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148569 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-webhook-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148665 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-node-bootstrap-token\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148813 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953ef82-fce5-4008-85c8-b1377a8f66a2-config\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148845 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148875 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k56kt\" (UniqueName: \"kubernetes.io/projected/3c194d09-8a64-45a1-b40b-d1ea249b2626-kube-api-access-k56kt\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.148986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-cabundle\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149007 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c194d09-8a64-45a1-b40b-d1ea249b2626-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149037 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dh4z6\" (UniqueName: \"kubernetes.io/projected/99828525-9397-448d-9a51-bc0da88038ac-kube-api-access-dh4z6\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.149288 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfnc4\" (UniqueName: \"kubernetes.io/projected/a7ecd4c5-97bd-4190-b474-a745b00d58aa-kube-api-access-qfnc4\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.153817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.187647 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.187813 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.193383 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.200535 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.208712 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.213091 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.213699 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" event={"ID":"010ced82-1614-4ade-958b-d12ea6cda1b9","Type":"ContainerStarted","Data":"90508a917965ce10b3d4539dd69bf2e241090c233c30aed866c0f42e7f9c8edc"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.213767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" event={"ID":"010ced82-1614-4ade-958b-d12ea6cda1b9","Type":"ContainerStarted","Data":"f56464798a61acc321f66cc28ebe165c756661bcb8e2a9030542e805fc8e8973"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.215156 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" event={"ID":"5691988c-c881-437e-aa60-317e424b3170","Type":"ContainerStarted","Data":"e056faac79cfd44ea89bb530737dab60b57099a92098fe4179cd9da6f2585435"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.216555 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerStarted","Data":"6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.217642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" event={"ID":"0a8b060f-1416-4676-af77-45c0b411ff59","Type":"ContainerStarted","Data":"d077cd20b8c092fe39dd142d804b7246ab2b6571d885765fed2cce619176de8c"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.219343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerStarted","Data":"9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.220725 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerStarted","Data":"92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.224231 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" event={"ID":"c731b198-314f-46a9-ad13-a4cc6c7bab94","Type":"ContainerStarted","Data":"7799c0504e8d1fffa9f0bc7d67e2c326156afaed4cf1d61765ba9e47c7794587"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.227635 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" 
event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerStarted","Data":"d3f3fdbd322417bb30c50dd78af3aba0532e0b870081cb8ae4572d5015d144e6"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.228486 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" event={"ID":"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71","Type":"ContainerStarted","Data":"22ebcfd1c51c5e05131ab99ff373fbefb60df0542ade3322f4db099d62fbcab9"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.229998 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" event={"ID":"a3b95c03-1b0d-4c06-bb85-2f9ed127737b","Type":"ContainerStarted","Data":"b7e3372169d8ed5c188bb717f6a1c8906c055796b66786e1124e3c02bd76e20f"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.230202 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.231548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" event={"ID":"218b57d8-c3a3-4a33-a3ef-6701cf557911","Type":"ContainerStarted","Data":"9b3f84cabb73fc20ad9534b981fb6e0a0313d0785c99dd0d15c0f9cdc6e4debe"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.232498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerStarted","Data":"91cfdcde5ecb33c60f3342cf5501d1b216c7e5139e2f48c5721944a5c98e3ec2"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.233794 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerStarted","Data":"0e823a46854aa252fe9015e01e9cddb6f75ae7ba4ce62f7d7338ee347ff378f1"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.235032 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerStarted","Data":"0c4fa24c07af4cdb6a65715225f501e2d489d532f902d5a36a0225bc9b457962"} Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250570 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953ef82-fce5-4008-85c8-b1377a8f66a2-serving-cert\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: 
\"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250765 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250790 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-registration-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.250868 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.750832638 +0000 UTC m=+151.577395499 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-profile-collector-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.250996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmx8w\" (UniqueName: \"kubernetes.io/projected/fa42b50c-59ed-4523-a6a0-994a72ff7071-kube-api-access-nmx8w\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-registration-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc 
kubenswrapper[4985]: I0128 18:15:40.251106 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/365a9e45-74e9-4231-8ccf-c5fbf200ab83-metrics-tls\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251151 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-srv-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251168 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7ecd4c5-97bd-4190-b474-a745b00d58aa-proxy-tls\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xvzz\" (UniqueName: \"kubernetes.io/projected/365a9e45-74e9-4231-8ccf-c5fbf200ab83-kube-api-access-9xvzz\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251204 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-webhook-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251244 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-node-bootstrap-token\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251294 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953ef82-fce5-4008-85c8-b1377a8f66a2-config\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: 
\"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k56kt\" (UniqueName: \"kubernetes.io/projected/3c194d09-8a64-45a1-b40b-d1ea249b2626-kube-api-access-k56kt\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251353 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-cabundle\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251374 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c194d09-8a64-45a1-b40b-d1ea249b2626-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251404 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251426 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dh4z6\" (UniqueName: \"kubernetes.io/projected/99828525-9397-448d-9a51-bc0da88038ac-kube-api-access-dh4z6\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251450 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qfnc4\" (UniqueName: \"kubernetes.io/projected/a7ecd4c5-97bd-4190-b474-a745b00d58aa-kube-api-access-qfnc4\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251470 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnnwc\" (UniqueName: \"kubernetes.io/projected/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-kube-api-access-hnnwc\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251513 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhf2x\" (UniqueName: \"kubernetes.io/projected/cb7bad3c-725d-4a80-b398-140c6acf3825-kube-api-access-rhf2x\") pod \"router-default-5444994796-qnrsp\" (UID: 
\"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251532 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-srv-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251553 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365a9e45-74e9-4231-8ccf-c5fbf200ab83-config-volume\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251573 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45774b89-be22-4692-a944-e5f12f898ea6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-certs\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251610 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251665 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqc7\" (UniqueName: \"kubernetes.io/projected/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-kube-api-access-vqqc7\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251683 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-plugins-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251705 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zb8kf\" (UniqueName: 
\"kubernetes.io/projected/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-kube-api-access-zb8kf\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251724 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-csi-data-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-metrics-certs\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251770 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251815 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251834 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-images\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251855 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-key\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.251935 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-plugins-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 
crc kubenswrapper[4985]: I0128 18:15:40.253376 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3c194d09-8a64-45a1-b40b-d1ea249b2626-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.253977 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-images\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnbvm\" (UniqueName: \"kubernetes.io/projected/70124ff4-00b0-41ef-947d-55eda7af02db-kube-api-access-qnbvm\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-stats-auth\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254370 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-default-certificate\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjmv7\" (UniqueName: \"kubernetes.io/projected/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-kube-api-access-gjmv7\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/0953ef82-fce5-4008-85c8-b1377a8f66a2-kube-api-access-d69sn\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.254474 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.75442812 +0000 UTC m=+151.580990971 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-socket-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7bad3c-725d-4a80-b398-140c6acf3825-service-ca-bundle\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254691 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c194d09-8a64-45a1-b40b-d1ea249b2626-proxy-tls\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-mountpoint-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254783 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l87gp\" (UniqueName: \"kubernetes.io/projected/45774b89-be22-4692-a944-e5f12f898ea6-kube-api-access-l87gp\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254852 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70124ff4-00b0-41ef-947d-55eda7af02db-tmpfs\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.258649 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-profile-collector-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.258959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/365a9e45-74e9-4231-8ccf-c5fbf200ab83-config-volume\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.259208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7ecd4c5-97bd-4190-b474-a745b00d58aa-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.259435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.254933 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwr85\" (UniqueName: \"kubernetes.io/projected/97299e5b-e1d8-41b0-b1b2-c5658f42a436-kube-api-access-rwr85\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.260139 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97299e5b-e1d8-41b0-b1b2-c5658f42a436-cert\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.260632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qw4sq\" (UniqueName: \"kubernetes.io/projected/cae1c988-06ab-4748-a62d-5bd7301b2c8d-kube-api-access-qw4sq\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261048 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-mountpoint-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261321 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-socket-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261398 4985 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-cabundle\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.261599 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/99828525-9397-448d-9a51-bc0da88038ac-csi-data-dir\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.262551 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/70124ff4-00b0-41ef-947d-55eda7af02db-tmpfs\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.262564 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0953ef82-fce5-4008-85c8-b1377a8f66a2-config\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.263349 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb7bad3c-725d-4a80-b398-140c6acf3825-service-ca-bundle\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.264206 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7ecd4c5-97bd-4190-b474-a745b00d58aa-proxy-tls\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.264365 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-signing-key\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.264396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/365a9e45-74e9-4231-8ccf-c5fbf200ab83-metrics-tls\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.265020 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.265191 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/cae1c988-06ab-4748-a62d-5bd7301b2c8d-srv-cert\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.266086 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.266771 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.267012 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-stats-auth\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.267586 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-metrics-certs\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.268749 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/45774b89-be22-4692-a944-e5f12f898ea6-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.268972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cb7bad3c-725d-4a80-b398-140c6acf3825-default-certificate\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.269762 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.271370 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"collect-profiles-29493735-f4d57\" (UID: 
\"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.285677 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/97299e5b-e1d8-41b0-b1b2-c5658f42a436-cert\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286209 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-webhook-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-node-bootstrap-token\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0953ef82-fce5-4008-85c8-b1377a8f66a2-serving-cert\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286537 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07d9a024-6342-42ba-8a0b-4db3aa777a82-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.286721 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.287147 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-certs\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.287592 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-profile-collector-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.287966 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2b84g\" (UniqueName: 
\"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"marketplace-operator-79b997595-b5wzm\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.288666 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.288894 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/70124ff4-00b0-41ef-947d-55eda7af02db-apiservice-cert\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.289866 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07d9a024-6342-42ba-8a0b-4db3aa777a82-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-x6vjm\" (UID: \"07d9a024-6342-42ba-8a0b-4db3aa777a82\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.290892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/fa42b50c-59ed-4523-a6a0-994a72ff7071-srv-cert\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.292926 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3c194d09-8a64-45a1-b40b-d1ea249b2626-proxy-tls\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.310194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmx8w\" (UniqueName: \"kubernetes.io/projected/fa42b50c-59ed-4523-a6a0-994a72ff7071-kube-api-access-nmx8w\") pod \"olm-operator-6b444d44fb-lghqh\" (UID: \"fa42b50c-59ed-4523-a6a0-994a72ff7071\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.324064 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhf2x\" (UniqueName: \"kubernetes.io/projected/cb7bad3c-725d-4a80-b398-140c6acf3825-kube-api-access-rhf2x\") pod \"router-default-5444994796-qnrsp\" (UID: \"cb7bad3c-725d-4a80-b398-140c6acf3825\") " pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.326825 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.345568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xvzz\" (UniqueName: \"kubernetes.io/projected/365a9e45-74e9-4231-8ccf-c5fbf200ab83-kube-api-access-9xvzz\") pod \"dns-default-fn9d5\" (UID: \"365a9e45-74e9-4231-8ccf-c5fbf200ab83\") " pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.361963 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.362479 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.86245351 +0000 UTC m=+151.689016331 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.364173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.364554 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.86454495 +0000 UTC m=+151.691107771 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.369799 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"collect-profiles-29493735-f4d57\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.386163 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zb8kf\" (UniqueName: \"kubernetes.io/projected/7f89cfdf-2a4d-4582-94f4-e53c45c3e09c-kube-api-access-zb8kf\") pod \"control-plane-machine-set-operator-78cbb6b69f-wp27s\" (UID: \"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.400756 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnnwc\" (UniqueName: \"kubernetes.io/projected/ab37c3ff-de29-4cba-8c5b-83d4fdca736c-kube-api-access-hnnwc\") pod \"machine-config-server-2lzzr\" (UID: \"ab37c3ff-de29-4cba-8c5b-83d4fdca736c\") " pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.421075 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dh4z6\" (UniqueName: \"kubernetes.io/projected/99828525-9397-448d-9a51-bc0da88038ac-kube-api-access-dh4z6\") pod \"csi-hostpathplugin-5zj27\" (UID: \"99828525-9397-448d-9a51-bc0da88038ac\") " pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.445000 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d69sn\" (UniqueName: \"kubernetes.io/projected/0953ef82-fce5-4008-85c8-b1377a8f66a2-kube-api-access-d69sn\") pod \"service-ca-operator-777779d784-cq5bj\" (UID: \"0953ef82-fce5-4008-85c8-b1377a8f66a2\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.461129 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qfnc4\" (UniqueName: \"kubernetes.io/projected/a7ecd4c5-97bd-4190-b474-a745b00d58aa-kube-api-access-qfnc4\") pod \"machine-config-operator-74547568cd-9l594\" (UID: \"a7ecd4c5-97bd-4190-b474-a745b00d58aa\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.469519 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.469956 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:40.969942405 +0000 UTC m=+151.796505226 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.484376 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjmv7\" (UniqueName: \"kubernetes.io/projected/893bf4c0-7b07-4e49-bff4-9ed7d52b3196-kube-api-access-gjmv7\") pod \"package-server-manager-789f6589d5-pdwpf\" (UID: \"893bf4c0-7b07-4e49-bff4-9ed7d52b3196\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.516720 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.541030 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqc7\" (UniqueName: \"kubernetes.io/projected/0e4812cb-3dc4-4d34-b24d-fd5f4a507030-kube-api-access-vqqc7\") pod \"service-ca-9c57cc56f-fzzsl\" (UID: \"0e4812cb-3dc4-4d34-b24d-fd5f4a507030\") " pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.546745 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.559609 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.566121 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.567466 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnbvm\" (UniqueName: \"kubernetes.io/projected/70124ff4-00b0-41ef-947d-55eda7af02db-kube-api-access-qnbvm\") pod \"packageserver-d55dfcdfc-tlrkn\" (UID: \"70124ff4-00b0-41ef-947d-55eda7af02db\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.570854 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.571307 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.071292285 +0000 UTC m=+151.897855106 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.575953 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.580999 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l87gp\" (UniqueName: \"kubernetes.io/projected/45774b89-be22-4692-a944-e5f12f898ea6-kube-api-access-l87gp\") pod \"multus-admission-controller-857f4d67dd-6ndmg\" (UID: \"45774b89-be22-4692-a944-e5f12f898ea6\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.581385 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.587990 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k56kt\" (UniqueName: \"kubernetes.io/projected/3c194d09-8a64-45a1-b40b-d1ea249b2626-kube-api-access-k56kt\") pod \"machine-config-controller-84d6567774-cbfgv\" (UID: \"3c194d09-8a64-45a1-b40b-d1ea249b2626\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.589676 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.599041 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.601095 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf"] Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.601173 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qw4sq\" (UniqueName: \"kubernetes.io/projected/cae1c988-06ab-4748-a62d-5bd7301b2c8d-kube-api-access-qw4sq\") pod \"catalog-operator-68c6474976-4lnjx\" (UID: \"cae1c988-06ab-4748-a62d-5bd7301b2c8d\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.604950 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.614314 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.618039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwr85\" (UniqueName: \"kubernetes.io/projected/97299e5b-e1d8-41b0-b1b2-c5658f42a436-kube-api-access-rwr85\") pod \"ingress-canary-g5knd\" (UID: \"97299e5b-e1d8-41b0-b1b2-c5658f42a436\") " pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.621204 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.634230 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.641487 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.649363 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-g5knd" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.657349 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-2lzzr" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.673010 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.673672 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.173650234 +0000 UTC m=+152.000213055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: W0128 18:15:40.674454 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3e3ff22_4547_453f_bd6a_bf8d4098f3a3.slice/crio-3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304 WatchSource:0}: Error finding container 3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304: Status 404 returned error can't find the container with id 3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304 Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.674565 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.775390 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.775796 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.275775426 +0000 UTC m=+152.102338247 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.858895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.879691 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.879901 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.379877635 +0000 UTC m=+152.206440456 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.881866 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.882286 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.382269853 +0000 UTC m=+152.208832674 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.983897 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:40 crc kubenswrapper[4985]: E0128 18:15:40.984237 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.48422293 +0000 UTC m=+152.310785751 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:40 crc kubenswrapper[4985]: I0128 18:15:40.987816 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.087767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.088355 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.588331708 +0000 UTC m=+152.414894589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.155212 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab37c3ff_de29_4cba_8c5b_83d4fdca736c.slice/crio-b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d WatchSource:0}: Error finding container b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d: Status 404 returned error can't find the container with id b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.190078 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.190129 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.190240 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" 
(UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.190652 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.690634136 +0000 UTC m=+152.517196957 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.196150 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.251525 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" event={"ID":"9675b92d-1a0c-460b-bbad-cd6abab61f2f","Type":"ContainerStarted","Data":"c359257c5b550240d6932b83414ea782aee988a80cd656c2b3c664f14ea5664d"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.252546 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.291813 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.292938 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.792925963 +0000 UTC m=+152.619488774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295494 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerStarted","Data":"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295521 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerStarted","Data":"0d1b21b030c24fdc6bba830677624d21cbb5cf6e3e7d4ae74ad81460cf48c5d3"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295534 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-bmvks"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.295550 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hk2lj"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.319453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" event={"ID":"a3b95c03-1b0d-4c06-bb85-2f9ed127737b","Type":"ContainerStarted","Data":"2bcc0ea57ad00fb5d19d309b535cd61c28cf5580d0d5cb443d19f13fe3299db4"} Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.320205 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1f443aa_50c0_4865_b6a3_a07d13b71e73.slice/crio-b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b WatchSource:0}: Error finding container b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b: Status 404 returned error can't find the container with id b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.326627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" event={"ID":"218b57d8-c3a3-4a33-a3ef-6701cf557911","Type":"ContainerStarted","Data":"61f4f9cfcfb91c7e2b3605826caa8b868277c5073550dd802503532a73b730ed"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.347415 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2lzzr" event={"ID":"ab37c3ff-de29-4cba-8c5b-83d4fdca736c","Type":"ContainerStarted","Data":"b9ea7903d1ee21f12aa1d5dc224da3033337c485d9b2d1882bb9a7756312ae0d"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.364699 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.364740 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" 
event={"ID":"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3","Type":"ContainerStarted","Data":"3d08275f7d255075cf5051dcfbc6e0d3d24d15b16d7a2d77c2254bdf95636304"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.372794 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-j6799"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.388464 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9l594"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.393499 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.394052 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.894033006 +0000 UTC m=+152.720595827 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.397091 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.397143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerStarted","Data":"d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.404690 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.404724 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.407086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"53006daf2106b60c7535f2e694eae0c2301a9a6300755e25161feabe1eba81f5"} Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.412681 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk"] Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.435879 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf0cd343_6643_4463_bb9b_6e291a601901.slice/crio-caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d WatchSource:0}: Error finding container caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d: Status 404 returned error can't find the container with id caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.441504 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa42b50c_59ed_4523_a6a0_994a72ff7071.slice/crio-f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983 WatchSource:0}: Error finding container f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983: Status 404 returned error can't find the container with id f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983 Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.444421 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb632812_bc0d_41f2_9c01_a19d40eb69be.slice/crio-06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1 WatchSource:0}: Error finding container 06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1: Status 404 returned error can't find the container with id 06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1 Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.457593 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf0e8632e_effa_4fe6_ac4d_8c33abe6eecc.slice/crio-cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea WatchSource:0}: Error finding container cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea: Status 404 returned error can't find the container with id cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.462738 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07d9a024_6342_42ba_8a0b_4db3aa777a82.slice/crio-8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce WatchSource:0}: Error finding container 8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce: Status 404 returned error can't find the container with id 8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.495082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.496310 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:41.996282812 +0000 UTC m=+152.822845783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.548858 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-fzzsl"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.555368 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.561051 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6ndmg"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.573359 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.586770 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.592579 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"] Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.596464 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.096444558 +0000 UTC m=+152.923007369 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.596834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.597291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.597682 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:15:42.097665773 +0000 UTC m=+152.924228594 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.599286 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.602448 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.699326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.699600 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.199559329 +0000 UTC m=+153.026122190 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.699734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.700561 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.200507056 +0000 UTC m=+153.027069897 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.715744 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-5zj27"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.788579 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-g5knd"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.800890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.801458 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.301438034 +0000 UTC m=+153.128000855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.809105 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e4812cb_3dc4_4d34_b24d_fd5f4a507030.slice/crio-1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0 WatchSource:0}: Error finding container 1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0: Status 404 returned error can't find the container with id 1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0 Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.819626 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj"] Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.822601 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-fn9d5"] Jan 28 18:15:41 crc kubenswrapper[4985]: W0128 18:15:41.838376 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97299e5b_e1d8_41b0_b1b2_c5658f42a436.slice/crio-0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00 WatchSource:0}: Error finding container 0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00: Status 404 returned error can't find the container with id 0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00 Jan 28 18:15:41 crc kubenswrapper[4985]: I0128 18:15:41.902317 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:41 crc kubenswrapper[4985]: E0128 18:15:41.904654 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.404636857 +0000 UTC m=+153.231199668 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.003613 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.004060 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.504039771 +0000 UTC m=+153.330602592 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.105182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.105739 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.105823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.106873 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.606856933 +0000 UTC m=+153.433419754 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.110448 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.172640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.177891 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podStartSLOduration=122.17787069 podStartE2EDuration="2m2.17787069s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.160055165 +0000 UTC m=+152.986617986" watchObservedRunningTime="2026-01-28 18:15:42.17787069 +0000 UTC m=+153.004433511"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.206824 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.207092 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.207164 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.207370 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.707355038 +0000 UTC m=+153.533917849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.216759 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.216920 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.227637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.309321 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.809300445 +0000 UTC m=+153.635863276 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.308855 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.411201 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.411392 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.911364116 +0000 UTC m=+153.737926937 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.411516 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.412035 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:42.912028095 +0000 UTC m=+153.738590916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.425313 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5knd" event={"ID":"97299e5b-e1d8-41b0-b1b2-c5658f42a436","Type":"ContainerStarted","Data":"0211f909758d157e47d98a4656be1ad4ffedcc85e0b1a95b92ae4be01693eb00"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.430130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerStarted","Data":"0d9d752a79dcaf04cf8b3f62e0482bd30919b4e1ceebcc26a5724adbdcde76a1"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.466201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerStarted","Data":"996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.473770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerStarted","Data":"943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.476310 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" event={"ID":"c08b13aa-cae7-420a-ae3b-4846ea74c5c8","Type":"ContainerStarted","Data":"1f99fac7cfe9e10b3503c2a47c0d78631d7f3448f9cc0f1b7d7d9f5215af91e8"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.476347 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" event={"ID":"c08b13aa-cae7-420a-ae3b-4846ea74c5c8","Type":"ContainerStarted","Data":"779328749c2fe35763334e5d9a6d775dfa61fdd788471c68340a7e74e8c74c4d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.481441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" event={"ID":"5691988c-c881-437e-aa60-317e424b3170","Type":"ContainerStarted","Data":"e23e2068516c6cb6fab9f98ec03fc1a5d04d167dd1269b4b9055e1fb8f017cd4"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.483821 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" event={"ID":"0a8b060f-1416-4676-af77-45c0b411ff59","Type":"ContainerStarted","Data":"523379a35a8f4358688b7a5f6c4206a08b1dd03849c444c85977a9d32ca697f0"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.488869 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fn9d5" event={"ID":"365a9e45-74e9-4231-8ccf-c5fbf200ab83","Type":"ContainerStarted","Data":"1b05901e0da1ee81f48449495269b7562be0c2e9e483b87c4525f64d493bf952"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.489659 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
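Context for the errors above: both the pending MountDevice for image-registry-697d97f7c8-4k6qp and the orphaned TearDown for pod 8f668bae-612b-4b75-9490-919e737c6a3b fail because the kubevirt.io.hostpath-provisioner plugin has not yet registered with this kubelet (its csi-hostpathplugin-5zj27 pod is itself only starting, as later entries show). Node-level registration can be confirmed from outside the node by reading the CSINode object, which mirrors the kubelet's driver list. A minimal client-go sketch, assuming a reachable kubeconfig; the node name crc is taken from the log prefix, and error handling is abbreviated:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location (~/.kube/config); adjust for the environment at hand.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The CSINode object reflects per-node plugin registration, i.e. the
	// "list of registered CSI drivers" these kubelet errors refer to.
	n, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, d := range n.Spec.Drivers {
		fmt.Println("registered on node:", d.Name)
	}
}

Until kubevirt.io.hostpath-provisioner shows up in that list, the mount and unmount retries seen here will keep failing.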
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.490294 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" event={"ID":"07d9a024-6342-42ba-8a0b-4db3aa777a82","Type":"ContainerStarted","Data":"8cc07f66cf536e3734e4900c2af95da4824702d6e0f524b29e0e3e1b219425ce"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.492865 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerStarted","Data":"06ea3a26c30303ff3eca9897196cc2f61f7d491aa305689f190b290e65b077b1"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.494114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerStarted","Data":"8f93ab89ce3c6adab00c97ddb3618e2ccd297812e80918e595461de298f590fd"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.497114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" event={"ID":"0953ef82-fce5-4008-85c8-b1377a8f66a2","Type":"ContainerStarted","Data":"6b6caec17afe76097b2fb413b8a01b0e5c28dd94270f42c5f88caef2787cd35b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.499111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" event={"ID":"9675b92d-1a0c-460b-bbad-cd6abab61f2f","Type":"ContainerStarted","Data":"88b597bfd1be0f2e24ec28bda9f4ca5f3afea78ad15dcd45bf22a2c4177227af"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.500172 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" event={"ID":"893bf4c0-7b07-4e49-bff4-9ed7d52b3196","Type":"ContainerStarted","Data":"24ea991929f5691447c508e8f97362e7755d0ee1ce0c8580e35c8f94a2adf371"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.501302 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" event={"ID":"bf0cd343-6643-4463-bb9b-6e291a601901","Type":"ContainerStarted","Data":"caa62707f85deaf8041e0f0a5513e4852c113f25dbe7abcf71c9c5125e88148d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.506062 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.508782 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" event={"ID":"0e4812cb-3dc4-4d34-b24d-fd5f4a507030","Type":"ContainerStarted","Data":"1f3a4d858555971484a5fd1f6b2765de86aeaee9e636d4ebbc11f61ac9f47cf0"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.511396 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" event={"ID":"fa6948a7-6763-4c03-b6f9-ecfb38a8a064","Type":"ContainerStarted","Data":"d3c44d232afd74c9f45fb63de97eaa472860c9005aa243d3ffbc79ecd22cf1a4"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.512375 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.513941 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.01391641 +0000 UTC m=+153.840479231 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.516938 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerStarted","Data":"1792476aa41bf09e5e86911b6b959eba4b9cb5a4e90cc3cf9dfa1d77a0efc8b8"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.519335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" event={"ID":"a1f443aa-50c0-4865-b6a3-a07d13b71e73","Type":"ContainerStarted","Data":"278abbe234a99ae7d3fd7712408ef7fdb0486f4826017a922229bd744bed9a2c"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.519374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" event={"ID":"a1f443aa-50c0-4865-b6a3-a07d13b71e73","Type":"ContainerStarted","Data":"b14f2a4c9cd7fb735af9bff29aed181fc4521a7bc2ac7c7d8e7924e42122fb4b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.519905 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-b5t5k" podStartSLOduration=122.51989344 podStartE2EDuration="2m2.51989344s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.517454531 +0000 UTC m=+153.344017362" watchObservedRunningTime="2026-01-28 18:15:42.51989344 +0000 UTC m=+153.346456261"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.521544 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerStarted","Data":"1e7f0e57b01f1d7574c6a758c09ab0d8248fafcd79d2a77c1cd5931c1c715640"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.524752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" event={"ID":"50627d4d-8f08-4db3-a8a4-e8b0b94b1b71","Type":"ContainerStarted","Data":"0a1cca030e7898a383fe11062638bfb92a0213efb9d089d5970baf9937a9fc55"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.526439 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" event={"ID":"45774b89-be22-4692-a944-e5f12f898ea6","Type":"ContainerStarted","Data":"e0d46af3685c149a5fcf5dec6a551c09120577182a6d1300402cb740e9ceb3af"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.530976 4985 generic.go:334] "Generic (PLEG): container finished" podID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerID="47e07904cc0955f8b324534c75aef4da5048843872e9f33590c74115e848c24b" exitCode=0
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.531044 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" event={"ID":"c731b198-314f-46a9-ad13-a4cc6c7bab94","Type":"ContainerDied","Data":"47e07904cc0955f8b324534c75aef4da5048843872e9f33590c74115e848c24b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.534062 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerStarted","Data":"8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.538200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-2lzzr" event={"ID":"ab37c3ff-de29-4cba-8c5b-83d4fdca736c","Type":"ContainerStarted","Data":"0dc1f292c7d223f611f63a9e0459a2a01432e443a858dfa4c18bbd7496a7fff4"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.541063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"feb43603996825516cfc092bf0fad2145b414d6c0a264d5677b6af0e016c8ef8"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.542121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" event={"ID":"3c194d09-8a64-45a1-b40b-d1ea249b2626","Type":"ContainerStarted","Data":"fcada611c3b3fe11486c3124fe3827a048fce22bf393cdde0e55e9fae605803b"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.543418 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" event={"ID":"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c","Type":"ContainerStarted","Data":"81cd0e0ccfddd850ed46ee2c16fe85c0d0c6bcf7c2090b607ffd1f44455d8136"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.544838 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" event={"ID":"a7ecd4c5-97bd-4190-b474-a745b00d58aa","Type":"ContainerStarted","Data":"5314169b85f4c91b8842227e9762a819a4bd8e7cc2993af76830dd293d144cdb"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.548235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" event={"ID":"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc","Type":"ContainerStarted","Data":"cc1dc4b5f899165076bd1518a496186f91a05dc16df07043c514bf2001990eea"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.551142 4985 generic.go:334] "Generic (PLEG): container finished" podID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerID="b6ccf435f06be325066da899de8006e2145eae58ef5b8e46d92c0cab3d64ce9d" exitCode=0
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.551214 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerDied","Data":"b6ccf435f06be325066da899de8006e2145eae58ef5b8e46d92c0cab3d64ce9d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.567623 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"6b10ed763b169d6f532181c8d5b22f9153351cfca621d39432cf510addeb355d"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.570695 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerStarted","Data":"c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.573087 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerStarted","Data":"f476b82bfa300d243f1a834e322509e00a075abd07e9dd5cafcffe28352ce983"}
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.573502 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.573540 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.575697 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body=
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.575742 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.579302 4985 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fdfqq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body=
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.579365 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.602782 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-b8tzt" podStartSLOduration=122.602763315 podStartE2EDuration="2m2.602763315s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.602286432 +0000 UTC m=+153.428849253" watchObservedRunningTime="2026-01-28 18:15:42.602763315 +0000 UTC m=+153.429326136"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.615339 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.615967 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.11594945 +0000 UTC m=+153.942512271 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
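The route-controller-manager and oauth-openshift readiness failures just above are plain connection refusals: the kubelet prober issues an HTTP GET against the pod IP and port before the server socket is listening. A rough stand-alone approximation of such a probe request, with the URL taken from the oauth-openshift entry; the one-second timeout and the skipped certificate verification are illustrative assumptions, not the kubelet prober's exact settings:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Probe-style client: short timeout, certificate checks skipped, since a
	// readiness probe only cares about reachability and the status code.
	client := &http.Client{
		Timeout: 1 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://10.217.0.11:6443/healthz")
	if err != nil {
		// This is the state logged above: "connect: connection refused".
		fmt.Println("probe failure:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe success:", resp.Status)
	} else {
		fmt.Println("probe failure:", resp.Status)
	}
}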
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.642204 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podStartSLOduration=122.642183135 podStartE2EDuration="2m2.642183135s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.641557508 +0000 UTC m=+153.468120319" watchObservedRunningTime="2026-01-28 18:15:42.642183135 +0000 UTC m=+153.468745946"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.703054 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podStartSLOduration=122.703024704 podStartE2EDuration="2m2.703024704s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:42.700075031 +0000 UTC m=+153.526637852" watchObservedRunningTime="2026-01-28 18:15:42.703024704 +0000 UTC m=+153.529587525"
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.716533 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.716702 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.216685843 +0000 UTC m=+154.043248664 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.717484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.718322 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.218280608 +0000 UTC m=+154.044843439 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.836269 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.841350 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.341315614 +0000 UTC m=+154.167878625 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:42 crc kubenswrapper[4985]: I0128 18:15:42.945963 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:42 crc kubenswrapper[4985]: E0128 18:15:42.946923 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.446898865 +0000 UTC m=+154.273461686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.049128 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.050612 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.550568961 +0000 UTC m=+154.377131782 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.055193 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.055669 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.555654015 +0000 UTC m=+154.382216836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.156462 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.157587 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.657565012 +0000 UTC m=+154.484127833 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
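At this point the reconciler is re-queueing the same two failing operations against pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 on every resync, roughly every 100ms. A throwaway filter for counting the failing operation kinds in a saved journal, assuming the log text arrives on stdin (for example, journalctl -u kubelet piped into go run summarize.go; the file name is illustrative):

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		if !strings.Contains(line, "not found in the list of registered CSI drivers") {
			continue
		}
		switch {
		case strings.Contains(line, "UnmountVolume.TearDown failed"):
			counts["teardown"]++
		case strings.Contains(line, "MountVolume.MountDevice failed"):
			counts["mountdevice"]++
		default:
			counts["other"]++
		}
	}
	fmt.Println(counts)
}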
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.259181 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.259824 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.759811697 +0000 UTC m=+154.586374518 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.360609 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.361150 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.861116726 +0000 UTC m=+154.687679547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: W0128 18:15:43.402455 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb WatchSource:0}: Error finding container f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb: Status 404 returned error can't find the container with id f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.462115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.464046 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:43.964033401 +0000 UTC m=+154.790596222 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.567346 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.567471 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.067413639 +0000 UTC m=+154.893976460 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
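Each failure above sets a deadline before the next attempt ("No retries permitted until ...", durationBeforeRetry 500ms here, the initial delay). When an operation keeps failing, the kubelet's pending-operations machinery stretches this delay exponentially up to a cap on the order of two minutes. An illustrative sketch of that retry shape under a doubling assumption, not the kubelet source:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retryWithBackoff calls op until it succeeds, waiting delay between
// attempts and doubling the delay (capped at max) after each failure,
// the same shape as the "No retries permitted until ..." scheduling above.
func retryWithBackoff(op func() error, delay, max time.Duration) {
	for {
		if err := op(); err == nil {
			return
		} else {
			fmt.Printf("failed: %v; no retries permitted until %s\n",
				err, time.Now().Add(delay).Format(time.RFC3339Nano))
		}
		time.Sleep(delay)
		if delay *= 2; delay > max {
			delay = max
		}
	}
}

func main() {
	attempts := 0
	retryWithBackoff(func() error {
		attempts++
		if attempts < 4 {
			return errors.New("driver name kubevirt.io.hostpath-provisioner not found")
		}
		return nil
	}, 500*time.Millisecond, 2*time.Minute)
	fmt.Println("succeeded after", attempts, "attempts")
}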
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.567880 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.568461 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.068444768 +0000 UTC m=+154.895007599 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.598659 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerStarted","Data":"437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.603056 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"f2550561034c8e4a230b7ed2e9f23ce77cc81984f50d283d56fec33e8fc739fb"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.617369 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" event={"ID":"45774b89-be22-4692-a944-e5f12f898ea6","Type":"ContainerStarted","Data":"9cc1040bc4b4050cbdb18298dfc9be5cbfe8a3a8c66606d5d752ec3f98391b2f"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.619055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerStarted","Data":"d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.619567 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.620914 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.620955 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.623033 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerStarted","Data":"ff73d967f8fb248341974b5e406a44622b69b6de9e5df338a4adc2449181764b"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.630602 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" event={"ID":"0e4812cb-3dc4-4d34-b24d-fd5f4a507030","Type":"ContainerStarted","Data":"b39419bdde15412964a2e3b95d2b8b203bd3bb7d0354865d148cf0f708038435"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.632640 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"ce07fa2ab23a4f3ece8649aaa467e9290f0006aaf0a7b738024af734b6dbeefc"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.635203 4985 generic.go:334] "Generic (PLEG): container finished" podID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerID="feb43603996825516cfc092bf0fad2145b414d6c0a264d5677b6af0e016c8ef8" exitCode=0
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.635290 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerDied","Data":"feb43603996825516cfc092bf0fad2145b414d6c0a264d5677b6af0e016c8ef8"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.639031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" event={"ID":"218b57d8-c3a3-4a33-a3ef-6701cf557911","Type":"ContainerStarted","Data":"46ef78aa78108a5a2180e0a31160ecd7bbfc8ab0e641d68cb257650ad6901d56"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.644993 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" podStartSLOduration=43.644946982 podStartE2EDuration="43.644946982s" podCreationTimestamp="2026-01-28 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.626311983 +0000 UTC m=+154.452874824" watchObservedRunningTime="2026-01-28 18:15:43.644946982 +0000 UTC m=+154.471509803"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.646336 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" event={"ID":"a7ecd4c5-97bd-4190-b474-a745b00d58aa","Type":"ContainerStarted","Data":"d82ec03a2421dbac9721060c554073a3ffc5995669ac840d112278ea87825a43"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.646387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" event={"ID":"a7ecd4c5-97bd-4190-b474-a745b00d58aa","Type":"ContainerStarted","Data":"2e3181f1a5918f1e10191a34e028952377350610670581a1468cf7388fd18edb"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.649446 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" event={"ID":"5691988c-c881-437e-aa60-317e424b3170","Type":"ContainerStarted","Data":"f75673dbae32a425735282fafc61e6dc472bef448e5d322e633bf53e1f982b2d"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.651779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" event={"ID":"893bf4c0-7b07-4e49-bff4-9ed7d52b3196","Type":"ContainerStarted","Data":"4bccb1fb1259c25912a8a652d5313efd046c3b3be158159b0b3bf4e137dc501b"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.651812 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" event={"ID":"893bf4c0-7b07-4e49-bff4-9ed7d52b3196","Type":"ContainerStarted","Data":"e0a9377ebc7932896bd107c05096556a55cb6e4df29babf79f8a822b2c002a23"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.652201 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.653975 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" event={"ID":"bf0cd343-6643-4463-bb9b-6e291a601901","Type":"ContainerStarted","Data":"045d50ee895655138412a42045d807578fd287fb32fee5d3d7edf4034654b0ff"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.658716 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-fzzsl" podStartSLOduration=123.658695383 podStartE2EDuration="2m3.658695383s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.657555771 +0000 UTC m=+154.484118592" watchObservedRunningTime="2026-01-28 18:15:43.658695383 +0000 UTC m=+154.485258204"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.659821 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podStartSLOduration=123.659813555 podStartE2EDuration="2m3.659813555s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.64207148 +0000 UTC m=+154.468634321" watchObservedRunningTime="2026-01-28 18:15:43.659813555 +0000 UTC m=+154.486376376"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.660476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerStarted","Data":"6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.660722 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.665618 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-g5knd" event={"ID":"97299e5b-e1d8-41b0-b1b2-c5658f42a436","Type":"ContainerStarted","Data":"0dd1582aa5163e30675732fbf375bc84e847b0cf1b41e9dc1a0a941d81828fcf"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.668338 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.669931 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body=
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.670187 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.671178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" event={"ID":"3c194d09-8a64-45a1-b40b-d1ea249b2626","Type":"ContainerStarted","Data":"19b1edc012d998b55e4fde5b82a097fa2178564028adc22c43436a3488ef2d92"}
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.671333 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.171309961 +0000 UTC m=+154.997872792 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.672613 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"45002a6a2c7138d9b42aef2ed0bd03e5dd1f62156eb66981aa82bf8098a68b3a"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.676165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" event={"ID":"7f89cfdf-2a4d-4582-94f4-e53c45c3e09c","Type":"ContainerStarted","Data":"b91f562174ffab8488433ee9f5d4dbeb69c2bd5a5a2200d215b875a40eae0c2e"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.685902 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" event={"ID":"0953ef82-fce5-4008-85c8-b1377a8f66a2","Type":"ContainerStarted","Data":"f5e186a2088ec4f860d3b8cb51c2f4190f8a2eabf5677599790b35a7acf2f350"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.697138 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hjjf7" podStartSLOduration=123.697085344 podStartE2EDuration="2m3.697085344s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.693570484 +0000 UTC m=+154.520133315" watchObservedRunningTime="2026-01-28 18:15:43.697085344 +0000 UTC m=+154.523648165"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.704476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" event={"ID":"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3","Type":"ContainerStarted","Data":"2a9f81657487a25f347bd15085f723ec9c4d54b203cce27b61d1672aa094702f"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.736736 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerStarted","Data":"f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.737449 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.739609 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body=
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.739670 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.749615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerStarted","Data":"08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.751394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j6799"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.756971 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.757242 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.760629 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-cq5bj" podStartSLOduration=123.760599609 podStartE2EDuration="2m3.760599609s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.755226336 +0000 UTC m=+154.581789157" watchObservedRunningTime="2026-01-28 18:15:43.760599609 +0000 UTC m=+154.587162430"
Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.771469 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.271455867 +0000 UTC m=+155.098018688 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.771137 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.764078 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" event={"ID":"f0e8632e-effa-4fe6-ac4d-8c33abe6eecc","Type":"ContainerStarted","Data":"232dc9c5ae53b35e6fec0e884895b5f8becaf78b490dcf66e0050a584b979043"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.794439 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" event={"ID":"07d9a024-6342-42ba-8a0b-4db3aa777a82","Type":"ContainerStarted","Data":"53ba5ca3f8b3acb1f5e25c0476efc9564c22718b5e9b28fb5ad08e152e9984a9"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.824213 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-g5knd" podStartSLOduration=6.824164785 podStartE2EDuration="6.824164785s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.792575988 +0000 UTC m=+154.619138819" watchObservedRunningTime="2026-01-28 18:15:43.824164785 +0000 UTC m=+154.650727606"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.829962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerStarted","Data":"f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6"}
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.831183 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.834114 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-8fcwv" podStartSLOduration=123.834087657 podStartE2EDuration="2m3.834087657s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.82959542 +0000 UTC m=+154.656158241" watchObservedRunningTime="2026-01-28 18:15:43.834087657 +0000 UTC m=+154.660650668"
Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.865805 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b5wzm container/marketplace-operator namespace/openshift-marketplace: Readiness
probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.865877 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.875223 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.875616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fn9d5" event={"ID":"365a9e45-74e9-4231-8ccf-c5fbf200ab83","Type":"ContainerStarted","Data":"08ad21707accc3f834748fd9d507769b137d814002f52158f67638eaab59faa3"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.876453 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:43 crc kubenswrapper[4985]: E0128 18:15:43.876769 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.376753 +0000 UTC m=+155.203315821 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.877322 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podStartSLOduration=123.877308766 podStartE2EDuration="2m3.877308766s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.875850584 +0000 UTC m=+154.702413415" watchObservedRunningTime="2026-01-28 18:15:43.877308766 +0000 UTC m=+154.703871587" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.890398 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" event={"ID":"fa6948a7-6763-4c03-b6f9-ecfb38a8a064","Type":"ContainerStarted","Data":"9852d8ac758b7698e1f7ea6bc02cb4d86b83e3ec735ef920e1d541945c84e9e5"} Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893701 4985 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-fdfqq container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893745 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.11:6443/healthz\": dial tcp 10.217.0.11:6443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893782 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893863 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893929 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.893958 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.903397 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get 
\"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.903461 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.904383 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.904448 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.921880 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9l594" podStartSLOduration=123.921849491 podStartE2EDuration="2m3.921849491s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.908620085 +0000 UTC m=+154.735182896" watchObservedRunningTime="2026-01-28 18:15:43.921849491 +0000 UTC m=+154.748412312" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.972768 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podStartSLOduration=123.972735978 podStartE2EDuration="2m3.972735978s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.937744433 +0000 UTC m=+154.764307254" watchObservedRunningTime="2026-01-28 18:15:43.972735978 +0000 UTC m=+154.799298799" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.973480 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-wp27s" podStartSLOduration=123.973469978 podStartE2EDuration="2m3.973469978s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:43.968526868 +0000 UTC m=+154.795089689" watchObservedRunningTime="2026-01-28 18:15:43.973469978 +0000 UTC m=+154.800032799" Jan 28 18:15:43 crc kubenswrapper[4985]: I0128 18:15:43.977215 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 
18:15:44.003810 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-7gnfx" podStartSLOduration=124.00378414 podStartE2EDuration="2m4.00378414s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.001142665 +0000 UTC m=+154.827705496" watchObservedRunningTime="2026-01-28 18:15:44.00378414 +0000 UTC m=+154.830346961" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.007849 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.507829915 +0000 UTC m=+155.334392736 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.042929 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-qnrsp" podStartSLOduration=124.042906332 podStartE2EDuration="2m4.042906332s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.041486981 +0000 UTC m=+154.868049802" watchObservedRunningTime="2026-01-28 18:15:44.042906332 +0000 UTC m=+154.869469153" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.078822 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.079223 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.579206103 +0000 UTC m=+155.405768924 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.107636 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podStartSLOduration=124.10760767 podStartE2EDuration="2m4.10760767s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.096542136 +0000 UTC m=+154.923104947" watchObservedRunningTime="2026-01-28 18:15:44.10760767 +0000 UTC m=+154.934170491" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.119678 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podStartSLOduration=124.119656093 podStartE2EDuration="2m4.119656093s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.119404626 +0000 UTC m=+154.945967467" watchObservedRunningTime="2026-01-28 18:15:44.119656093 +0000 UTC m=+154.946218914" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.151102 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-z9cdk" podStartSLOduration=124.151077226 podStartE2EDuration="2m4.151077226s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.149429859 +0000 UTC m=+154.975992680" watchObservedRunningTime="2026-01-28 18:15:44.151077226 +0000 UTC m=+154.977640047" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.181058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.181691 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.681674155 +0000 UTC m=+155.508236976 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.203804 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vgvlm" podStartSLOduration=124.203782923 podStartE2EDuration="2m4.203782923s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.199562343 +0000 UTC m=+155.026125174" watchObservedRunningTime="2026-01-28 18:15:44.203782923 +0000 UTC m=+155.030345744" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.258066 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-x6vjm" podStartSLOduration=124.258042465 podStartE2EDuration="2m4.258042465s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.255531514 +0000 UTC m=+155.082094335" watchObservedRunningTime="2026-01-28 18:15:44.258042465 +0000 UTC m=+155.084605286" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.284823 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.285222 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.785206307 +0000 UTC m=+155.611769128 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.344927 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-j6799" podStartSLOduration=124.344908884 podStartE2EDuration="2m4.344908884s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.302571071 +0000 UTC m=+155.129133892" watchObservedRunningTime="2026-01-28 18:15:44.344908884 +0000 UTC m=+155.171471705" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.345293 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-k96zr" podStartSLOduration=124.345290475 podStartE2EDuration="2m4.345290475s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.342682601 +0000 UTC m=+155.169245422" watchObservedRunningTime="2026-01-28 18:15:44.345290475 +0000 UTC m=+155.171853296" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.376182 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-fn9d5" podStartSLOduration=7.376153862 podStartE2EDuration="7.376153862s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.372847008 +0000 UTC m=+155.199409829" watchObservedRunningTime="2026-01-28 18:15:44.376153862 +0000 UTC m=+155.202716683" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.387048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.387696 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.887677799 +0000 UTC m=+155.714240620 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.398080 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-2lzzr" podStartSLOduration=7.398051274 podStartE2EDuration="7.398051274s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.39086082 +0000 UTC m=+155.217423651" watchObservedRunningTime="2026-01-28 18:15:44.398051274 +0000 UTC m=+155.224614095" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.448614 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hk2lj" podStartSLOduration=124.44858874 podStartE2EDuration="2m4.44858874s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.443891197 +0000 UTC m=+155.270454028" watchObservedRunningTime="2026-01-28 18:15:44.44858874 +0000 UTC m=+155.275151561" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.450707 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-4tdfc" podStartSLOduration=124.45069867 podStartE2EDuration="2m4.45069867s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.422439957 +0000 UTC m=+155.249002778" watchObservedRunningTime="2026-01-28 18:15:44.45069867 +0000 UTC m=+155.277261491" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.489815 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.490212 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:44.990194433 +0000 UTC m=+155.816757244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.498912 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podStartSLOduration=124.49888737 podStartE2EDuration="2m4.49888737s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.498464208 +0000 UTC m=+155.325027039" watchObservedRunningTime="2026-01-28 18:15:44.49888737 +0000 UTC m=+155.325450191" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.524352 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-hpz9q" podStartSLOduration=124.524329703 podStartE2EDuration="2m4.524329703s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.52389106 +0000 UTC m=+155.350453881" watchObservedRunningTime="2026-01-28 18:15:44.524329703 +0000 UTC m=+155.350892534" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.591779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.592234 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.092218502 +0000 UTC m=+155.918781323 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.617267 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.618486 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.618535 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.693033 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.693600 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.193563752 +0000 UTC m=+156.020126573 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.795014 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.795476 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.295456118 +0000 UTC m=+156.122018939 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.896157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:44 crc kubenswrapper[4985]: E0128 18:15:44.896609 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.396595022 +0000 UTC m=+156.223157843 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.898085 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"cfde496ca4baaeceb8a817a29a2696a5661461cf557694ecd9171c6c50943829"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.901128 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" event={"ID":"bf0cd343-6643-4463-bb9b-6e291a601901","Type":"ContainerStarted","Data":"f982907c80716b41b2268550bbb2daa5e64386dd6432f8608c172c4226928c37"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.903580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" event={"ID":"3c194d09-8a64-45a1-b40b-d1ea249b2626","Type":"ContainerStarted","Data":"37e117138c941f0cebea21f7f5b8b3c3deec93036ed86c7b058c4b0b27ff8bc6"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.906733 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"05f111c74c9500c86fafc215a536173dfc3b7fa58cd6b2b982164a7fd7c3d8ea"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.906910 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.908991 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-fn9d5" 
event={"ID":"365a9e45-74e9-4231-8ccf-c5fbf200ab83","Type":"ContainerStarted","Data":"d2c9f47132c3973975eadd76bbe8f6211b7751c6743abf694805a05f9404eacb"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.911040 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" event={"ID":"45774b89-be22-4692-a944-e5f12f898ea6","Type":"ContainerStarted","Data":"1b56c61f29869e6d115cb72d69bb76bf27b5b3ec3c86dee45d55a082afb8edfe"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.912518 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"c23e9ca62fecc1eaeec6e46012eb54880b96d777b6c5e6f65e1279af6067c6ed"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.915436 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" event={"ID":"ebf5f82e-2a14-49d9-b670-59ed73e71203","Type":"ContainerStarted","Data":"632767b61ba7c6fe31c83bd2e9588921f2a06fd37cdf52d071e99c26ec9f8357"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.918117 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.918556 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.920271 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" event={"ID":"d3e3ff22-4547-453f-bd6a-bf8d4098f3a3","Type":"ContainerStarted","Data":"970030d2427f110d447404f6fef91f4110a1a65dcf9b743c75b91570cf0933d3"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.922113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" event={"ID":"c731b198-314f-46a9-ad13-a4cc6c7bab94","Type":"ContainerStarted","Data":"e7798f4962eade42046a64293003b8e80cca5c5b2a0672f8d559d427a29ec3d0"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.924647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" event={"ID":"a3b95c03-1b0d-4c06-bb85-2f9ed127737b","Type":"ContainerStarted","Data":"961aaa261c9d6ac69a1bf08ecd14fc941c76adcc2cce7c9fd3a34201dd2adc4f"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.936843 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" event={"ID":"fa6948a7-6763-4c03-b6f9-ecfb38a8a064","Type":"ContainerStarted","Data":"65fdc16f968f16491c13b13b383d45b1496d97698761eb8019fd722bab5c5e95"} Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937279 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937331 4985 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937722 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.937747 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938071 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938108 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938393 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938558 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938463 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b5wzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.938776 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939046 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure 
output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939074 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939484 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.939512 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.971748 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podStartSLOduration=124.971728297 podStartE2EDuration="2m4.971728297s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.969290788 +0000 UTC m=+155.795853619" watchObservedRunningTime="2026-01-28 18:15:44.971728297 +0000 UTC m=+155.798291118" Jan 28 18:15:44 crc kubenswrapper[4985]: I0128 18:15:44.996723 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-6qh9r" podStartSLOduration=124.996699987 podStartE2EDuration="2m4.996699987s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:44.996188702 +0000 UTC m=+155.822751523" watchObservedRunningTime="2026-01-28 18:15:44.996699987 +0000 UTC m=+155.823262808" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:44.998910 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.002171 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.502155842 +0000 UTC m=+156.328718673 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.039166 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-bmvks" podStartSLOduration=125.039130843 podStartE2EDuration="2m5.039130843s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.029750346 +0000 UTC m=+155.856313167" watchObservedRunningTime="2026-01-28 18:15:45.039130843 +0000 UTC m=+155.865693664" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.070153 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-cbfgv" podStartSLOduration=125.070134094 podStartE2EDuration="2m5.070134094s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.068535448 +0000 UTC m=+155.895098289" watchObservedRunningTime="2026-01-28 18:15:45.070134094 +0000 UTC m=+155.896696925" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.112614 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.112895 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.612881489 +0000 UTC m=+156.439444310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.164101 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-k5vgf" podStartSLOduration=125.164075533 podStartE2EDuration="2m5.164075533s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.111631793 +0000 UTC m=+155.938194614" watchObservedRunningTime="2026-01-28 18:15:45.164075533 +0000 UTC m=+155.990638344" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.205866 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6ndmg" podStartSLOduration=125.20584169 podStartE2EDuration="2m5.20584169s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.197772771 +0000 UTC m=+156.024335592" watchObservedRunningTime="2026-01-28 18:15:45.20584169 +0000 UTC m=+156.032404531" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.213963 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.214450 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.714431034 +0000 UTC m=+156.540993855 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.249041 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podStartSLOduration=125.249014147 podStartE2EDuration="2m5.249014147s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.247618968 +0000 UTC m=+156.074181789" watchObservedRunningTime="2026-01-28 18:15:45.249014147 +0000 UTC m=+156.075576968" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.294935 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podStartSLOduration=126.294902801 podStartE2EDuration="2m6.294902801s" podCreationTimestamp="2026-01-28 18:13:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.289977431 +0000 UTC m=+156.116540262" watchObservedRunningTime="2026-01-28 18:15:45.294902801 +0000 UTC m=+156.121465622" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.315280 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.315449 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.815424674 +0000 UTC m=+156.641987505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.315558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.315917 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.815906508 +0000 UTC m=+156.642469339 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.351730 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hkl" podStartSLOduration=125.351696245 podStartE2EDuration="2m5.351696245s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:45.346539669 +0000 UTC m=+156.173102510" watchObservedRunningTime="2026-01-28 18:15:45.351696245 +0000 UTC m=+156.178259066" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.417045 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.417215 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:45.917191096 +0000 UTC m=+156.743753927 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.620594 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.621138 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.121114432 +0000 UTC m=+156.947677323 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.625845 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:45 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:45 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:45 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.625914 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.721735 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.721991 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.221962178 +0000 UTC m=+157.048524989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.926277 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:45 crc kubenswrapper[4985]: E0128 18:15:45.926621 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.426613162 +0000 UTC m=+157.253175983 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.944351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409"} Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.944848 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-b5wzm container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.944889 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.945379 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 18:15:45 crc kubenswrapper[4985]: I0128 18:15:45.945421 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 18:15:46 
crc kubenswrapper[4985]: I0128 18:15:46.027381 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.028617 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.52857708 +0000 UTC m=+157.355139891 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.129559 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.130033 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.630012763 +0000 UTC m=+157.456575584 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.230941 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.231120 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.731100985 +0000 UTC m=+157.557663796 (durationBeforeRetry 500ms). 
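[Annotation] The alternating reconciler_common.go:159 (UnmountVolume for the deleted pod 8f668bae-612b-4b75-9490-919e737c6a3b) and reconciler_common.go:218 (MountVolume for the pending image-registry-697d97f7c8-4k6qp) records come from the kubelet volume manager's reconciler, which continuously diffs desired state against actual state and re-queues failed operations after durationBeforeRetry. A toy model of that retry pattern, with every identifier a hypothetical stand-in for the real kubelet code:

```go
// Toy model of the retry loop above: an operation fails while the CSI driver
// is unregistered, and the reconciler parks it until now+500ms, mirroring
// nestedpendingoperations.go's "No retries permitted until ...".
package main

import (
	"errors"
	"fmt"
	"time"
)

var driverRegistered bool // would flip to true once the plugin registers

func tearDown(volume string) error {
	if !driverRegistered {
		return errors.New("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers")
	}
	return nil
}

func main() {
	nextTry := time.Now()
	for attempt := 1; attempt <= 3; attempt++ {
		time.Sleep(time.Until(nextTry)) // no-op once the deadline has passed
		if err := tearDown("pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8"); err != nil {
			nextTry = time.Now().Add(500 * time.Millisecond) // durationBeforeRetry
			fmt.Printf("attempt %d: UnmountVolume.TearDown failed: %v\n", attempt, err)
			continue
		}
		fmt.Println("volume torn down")
		return
	}
}
```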
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.433688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.434052 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:46.934043873 +0000 UTC m=+157.760606694 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.434177 4985 csr.go:261] certificate signing request csr-hfr5g is approved, waiting to be issued Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.448547 4985 csr.go:257] certificate signing request csr-hfr5g is issued Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.534585 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.534807 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.034776605 +0000 UTC m=+157.861339426 (durationBeforeRetry 500ms). 
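[Annotation] csr-hfr5g going from "approved, waiting to be issued" to "issued" is the serving-certificate flow: the kubelet files a CertificateSigningRequest, an approver sets the Approved condition, the signer writes the signed certificate into status.certificate, and the kubelet's certificate manager schedules the next rotation (visible further down in this log). A small client-go sketch that inspects a CSR the same way; it assumes in-cluster credentials and RBAC to read CSRs:

```go
// Sketch: read a CSR's conditions and whether a signed certificate has been
// issued, mirroring the csr.go "approved" / "issued" records above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	csr, err := cs.CertificatesV1().CertificateSigningRequests().Get(
		context.TODO(), "csr-hfr5g", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range csr.Status.Conditions {
		fmt.Printf("condition: %s=%s\n", c.Type, c.Status)
	}
	// The signer populates status.certificate once the CSR is issued.
	fmt.Println("issued:", len(csr.Status.Certificate) > 0)
}
```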
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.534873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.535265 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.035236468 +0000 UTC m=+157.861799279 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.621401 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:46 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:46 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:46 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.621470 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:46 crc kubenswrapper[4985]: I0128 18:15:46.635644 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:46 crc kubenswrapper[4985]: E0128 18:15:46.635885 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.135841387 +0000 UTC m=+157.962404208 (durationBeforeRetry 500ms). 
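[Annotation] router-default's startup probe keeps failing with HTTP 500 because the aggregated healthz endpoint still reports [-]backend-http and [-]has-synced; while the startup probe fails, the kubelet withholds readiness and liveness probing, and only after failureThreshold consecutive failures would it restart the container. For illustration only, a comparable startup probe expressed in Go; the path, port, and thresholds are assumptions, not values read from the router's Deployment:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// Illustrative startup probe comparable to the router's; every value here is
// assumed, not taken from the actual router spec.
var startupProbe = corev1.Probe{
	ProbeHandler: corev1.ProbeHandler{
		HTTPGet: &corev1.HTTPGetAction{
			Path: "/healthz/ready",
			Port: intstr.FromInt(1936),
		},
	},
	PeriodSeconds:    1,   // probe every second, matching the ~1s cadence in the log
	FailureThreshold: 120, // allow time for HAProxy to sync its backends
}
```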
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.246282 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.246661 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.746644425 +0000 UTC m=+158.573207246 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.347365 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.347791 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.847768819 +0000 UTC m=+158.674331640 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.352395 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.353545 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.356853 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.449635 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:47.949603093 +0000 UTC m=+158.776165914 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449663 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 18:10:46 +0000 UTC, rotation deadline is 2026-11-03 02:00:33.068015721 +0000 UTC Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449721 4985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6679h44m45.618297975s for next certificate rotation Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449703 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.449903 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.473923 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.540955 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-nbllw"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.542162 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.551836 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.551865 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.552193 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.052172198 +0000 UTC m=+158.878735019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.552934 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.578862 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbllw"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.600047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"certified-operators-58qq5\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.623786 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:47 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:47 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:47 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.625421 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.653718 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.654088 4985 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.654243 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.654352 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.654493 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.154472545 +0000 UTC m=+158.981035366 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.658394 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.659066 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.662370 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.662675 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.667033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.684092 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.745846 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.747134 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.755965 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756192 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756282 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756310 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.756392 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.256354901 +0000 UTC m=+159.082917872 (durationBeforeRetry 500ms). 
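[Annotation] In contrast to the stuck CSI PVC, the catalog pods' volumes (certified-operators-58qq5 and community-operators-nbllw) mount on the first pass: catalog-content and utilities are emptyDir volumes and kube-api-access-* is the kubelet-built projected service-account token, none of which depends on an external driver, which is why their MountVolume.SetUp records succeed while pvc-657094db keeps retrying. A schematic of those volume shapes; the names come from the log, the rest is illustrative:

```go
package main

import corev1 "k8s.io/api/core/v1"

// Schematic of the volume shapes the marketplace catalog pods mount in the
// surrounding records. Only the names are taken from the log.
var catalogVolumes = []corev1.Volume{
	{Name: "catalog-content", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	{Name: "utilities", VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}}},
	// kube-api-access-* is a projected volume the kubelet assembles itself
	// (service-account token, CA bundle, namespace); shown here only as a name.
	{Name: "kube-api-access-99vxj", VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{},
	}},
}
```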
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.756773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.757334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.757638 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.257625597 +0000 UTC m=+159.084188418 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.759038 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.759164 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.795011 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"community-operators-nbllw\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.824332 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.856840 4985 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860396 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860869 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860939 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.860993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.861061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.861117 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.861692 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.361667063 +0000 UTC m=+159.188229884 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.861778 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.940093 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.960497 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964102 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964151 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.964284 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: E0128 18:15:47.964930 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.464908167 +0000 UTC m=+159.291470988 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.965238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.966122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.974488 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:47 crc kubenswrapper[4985]: I0128 18:15:47.979060 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.032579 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.042773 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"806885dc798ad388908373bc69cdee91b5601deeb01836e72ab0bfaaa4c37352"} Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.066687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.066922 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.066956 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.067012 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.067184 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.567167273 +0000 UTC m=+159.393730094 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.070696 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"certified-operators-ngcsk\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.085633 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173730 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173754 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.173773 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.174222 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod 
\"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.174460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.175050 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.675033919 +0000 UTC m=+159.501596750 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.233980 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"community-operators-tkbjb\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.274699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.275095 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.775071602 +0000 UTC m=+159.601634423 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.318791 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.351597 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.378364 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.378805 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.878790339 +0000 UTC m=+159.705353160 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.399456 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.484832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.486476 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:48.986458209 +0000 UTC m=+159.813021030 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.595961 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.596389 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:15:49.096372732 +0000 UTC m=+159.922935553 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.626518 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-nbllw"]
Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.634448 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 18:15:48 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld
Jan 28 18:15:48 crc kubenswrapper[4985]: [+]process-running ok
Jan 28 18:15:48 crc kubenswrapper[4985]: healthz check failed
Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.634536 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.698894 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.699325 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.199308078 +0000 UTC m=+160.025870899 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.803149 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp"
Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.803593 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-01-28 18:15:49.303575661 +0000 UTC m=+160.130138482 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.909197 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:48 crc kubenswrapper[4985]: E0128 18:15:48.909601 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.409577403 +0000 UTC m=+160.236140224 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.927438 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.964223 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.964797 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.970775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:15:48 crc kubenswrapper[4985]: I0128 18:15:48.999539 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.013335 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.013689 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 18:15:49.513677011 +0000 UTC m=+160.340239832 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.034199 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.094367 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.125442 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerStarted","Data":"03a87c1436d3238d93dfc27faef0f425b055e15e52fd95499db1893c39fae51c"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.131335 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.133018 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.632994412 +0000 UTC m=+160.459557233 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.170199 4985 generic.go:334] "Generic (PLEG): container finished" podID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerID="f89df29bdb5f4a1ac1d8a46bc1cdba1d48b8e3013145698fb6cdebd84b29470e" exitCode=0 Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.170309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"f89df29bdb5f4a1ac1d8a46bc1cdba1d48b8e3013145698fb6cdebd84b29470e"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.170341 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerStarted","Data":"29cf66044b42b3771161b4b736214738baedd3db9a4eab25aec806dff09290a6"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.190217 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.200909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerStarted","Data":"fee5ad9c634324fb795c0ec18b20b982cec13ce8646e5a41d3259fd33ab8724c"} Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.218799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.225374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.234835 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.235403 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.735386492 +0000 UTC m=+160.561949313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.235866 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.235899 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.244830 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.253506 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"] Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.255733 4985 patch_prober.go:28] interesting pod/console-f9d7485db-b5t5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.255787 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.337381 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.337635 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.837586875 +0000 UTC m=+160.664149696 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.338123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.341203 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.841180988 +0000 UTC m=+160.667743809 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.348199 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.418983 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.419044 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.419315 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.419397 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.438912 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.440499 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:49.94048264 +0000 UTC m=+160.767045461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.539854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.541294 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.542105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.542541 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.04252979 +0000 UTC m=+160.869092611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.560801 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.563584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"]
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.628778 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 18:15:49 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]process-running ok
Jan 28 18:15:49 crc kubenswrapper[4985]: healthz check failed
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.628848 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.645900 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.646205 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.646241 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.646342 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh"
Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.646471 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b
nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.146449373 +0000 UTC m=+160.973012194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747318 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747850 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747871 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.747897 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.749322 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.749676 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.249660456 +0000 UTC m=+161.076223277 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.749911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.782737 4985 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.783513 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"redhat-marketplace-mkflh\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.851898 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.852470 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.352453147 +0000 UTC m=+161.179015968 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.927626 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]log ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]etcd ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/start-apiserver-admission-initializer ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/generic-apiserver-start-informers ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/max-in-flight-filter ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/storage-object-count-tracker-hook ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/image.openshift.io-apiserver-caches ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld
Jan 28 18:15:49 crc kubenswrapper[4985]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/project.openshift.io-projectcache ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-startinformers ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/openshift.io-restmapperupdater ok
Jan 28 18:15:49 crc kubenswrapper[4985]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Jan 28 18:15:49 crc kubenswrapper[4985]: livez check failed
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.927698 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.928002 4985 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T18:15:49.782777017Z","Handler":null,"Name":""}
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.928447 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"]
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.929633 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448"
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.942703 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"]
Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.944175 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.944916 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 18:15:49 crc kubenswrapper[4985]: I0128 18:15:49.954312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:49 crc kubenswrapper[4985]: E0128 18:15:49.954774 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 18:15:50.454759534 +0000 UTC m=+161.281322355 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-4k6qp" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.005599 4985 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.005636 4985 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056409 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.056923 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod 
\"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.065931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159055 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.159889 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.160171 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.200588 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"redhat-marketplace-vq448\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.216709 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerID="5959b03d9788b40f0a702f2c357697b3ecb07a0cda1a9c0b368fd63267cd0bea" exitCode=0 Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.216971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"5959b03d9788b40f0a702f2c357697b3ecb07a0cda1a9c0b368fd63267cd0bea"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.222073 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerID="6fbcabfceffdf85763f4008a949c3b5ecf075282566d7602a9169724a8470662" exitCode=0 Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.222427 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"6fbcabfceffdf85763f4008a949c3b5ecf075282566d7602a9169724a8470662"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.222462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerStarted","Data":"7de4f851d6fd3b3bdf2435ffb6090fbd2d50bbda34ffd7c0a08f88549a7af86b"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.233070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerStarted","Data":"b8a8d74d6582f05ce9c27631887a13eeb3a1cc783db0fd72172a5370c7d0843d"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.253228 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.261281 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff1a5336-5c99-49fa-bb89-311781866770" containerID="081b66f566faa6677cfda3978e83d93b4dce7e5760fe6c65c107d2c177beeb71" exitCode=0 Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.261415 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"081b66f566faa6677cfda3978e83d93b4dce7e5760fe6c65c107d2c177beeb71"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.261456 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerStarted","Data":"443d55c2efdfe0f8e6f7fa0e88bf057b626e08f470a93af561b93e9387fb0988"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.282399 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.282444 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"670c758c6e0b4d061db4a1652fe94536b8c4f9f8219d2776bceabf3e6e3134da"} Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.315193 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
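The mount failure and recovery above is a registration race: at 18:15:49.954 MountVolume.MountDevice for pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 fails because kubevirt.io.hostpath-provisioner is not yet in the kubelet's list of registered CSI drivers; the plugin then registers at 18:15:50.005 over /var/lib/kubelet/plugins/csi-hostpath/csi.sock, and the retry permitted after 500ms succeeds at 18:15:50.315 (the driver does not advertise STAGE_UNSTAGE_VOLUME, so the staging step is skipped). A minimal sketch for measuring that gap from a capture like this one — it assumes klog-stamped journal lines on stdin and keys off the exact message substrings visible above; it is an analysis helper, not part of the kubelet:

```go
// csilag.go: how long did volume mounts fail before the CSI driver registered?
// Minimal sketch for kubelet journal captures like the one above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

// klogStamp pulls the "I0128 18:15:49.954774" timestamp out of a kubelet line
// (the leading letter is the klog severity: I/W/E/F).
var klogStamp = regexp.MustCompile(`[IWEF](\d{4} \d{2}:\d{2}:\d{2}\.\d{6})`)

func stamp(line string) (time.Time, bool) {
	m := klogStamp.FindStringSubmatch(line)
	if m == nil {
		return time.Time{}, false
	}
	// klog omits the year; month/day/time alone is enough for a delta
	// inside a single capture.
	t, err := time.Parse("0102 15:04:05.000000", m[1])
	return t, err == nil
}

func main() {
	var firstFail, registered time.Time
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		line := sc.Text()
		t, ok := stamp(line)
		if !ok {
			continue
		}
		if firstFail.IsZero() && strings.Contains(line, "not found in the list of registered CSI drivers") {
			firstFail = t
		}
		if registered.IsZero() && strings.Contains(line, "Register new plugin with name") {
			registered = t
		}
	}
	if !firstFail.IsZero() && !registered.IsZero() {
		fmt.Printf("driver registered %v after the first mount failure\n", registered.Sub(firstFail))
	}
}
```

Fed this excerpt, it reports a gap of roughly 51ms (18:15:49.954774 to 18:15:50.005636), comfortably inside the 500ms durationBeforeRetry that the nestedpendingoperations error announces.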
Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.315280 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.344704 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.344687856 podStartE2EDuration="3.344687856s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:50.343272035 +0000 UTC m=+161.169834856" watchObservedRunningTime="2026-01-28 18:15:50.344687856 +0000 UTC m=+161.171250677" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.345191 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.370421 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:15:50 crc kubenswrapper[4985]: W0128 18:15:50.388889 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd797afdd_19c6_45ed_81c8_5fa31175e121.slice/crio-b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c WatchSource:0}: Error finding container b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c: Status 404 returned error can't find the container with id b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.430945 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podStartSLOduration=13.430911726 podStartE2EDuration="13.430911726s" podCreationTimestamp="2026-01-28 18:15:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:50.397098745 +0000 UTC m=+161.223661566" watchObservedRunningTime="2026-01-28 18:15:50.430911726 +0000 UTC m=+161.257474547" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.483899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-4k6qp\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.529920 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.531051 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.534986 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.546312 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.581932 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.600480 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.620652 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.628659 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:50 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:50 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:50 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.628739 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.644605 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.669865 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.685607 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.687110 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.687235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.687272 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.788699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.788778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.788795 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.789741 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.789990 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"redhat-operators-zcwgk\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.822499 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"redhat-operators-zcwgk\" (UID: 
\"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.887646 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.936583 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.938054 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:50 crc kubenswrapper[4985]: I0128 18:15:50.961070 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.021696 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.024047 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.036371 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.036682 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.065272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.096190 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.096282 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.096357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.198510 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.198562 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wpdsv\" (UniqueName: 
\"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.199080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.199114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.199138 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.200238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.203974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.224492 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.226394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"redhat-operators-2zfzc\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.273466 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.286438 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.305343 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.305435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.305550 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.314234 4985 generic.go:334] "Generic (PLEG): container finished" podID="5593b8be-de94-4ed3-81cb-449457767772" containerID="b8a8d74d6582f05ce9c27631887a13eeb3a1cc783db0fd72172a5370c7d0843d" exitCode=0 Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.314528 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerDied","Data":"b8a8d74d6582f05ce9c27631887a13eeb3a1cc783db0fd72172a5370c7d0843d"} Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.317649 4985 generic.go:334] "Generic (PLEG): container finished" podID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerID="1c1dfa1718d5bb120e659769c80766e3c5cedbd440f581ae9a47ced34819aecd" exitCode=0 Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.317694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"1c1dfa1718d5bb120e659769c80766e3c5cedbd440f581ae9a47ced34819aecd"} Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.317713 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerStarted","Data":"b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c"} Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.320590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerStarted","Data":"718f56cadfa73ec9c883cb72f3a4ad761b62779dbd38dd0559a00a1f1b0a3abc"} Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.323047 4985 generic.go:334] "Generic (PLEG): container finished" podID="bebbf794-5459-4a75-bff1-92b7551d4784" containerID="e42228c4ddd411e6182ff6bcd41d0e27a2e8b74487dc7087bd1ccdb69c1e91bf" exitCode=0 Jan 
28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.323682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"e42228c4ddd411e6182ff6bcd41d0e27a2e8b74487dc7087bd1ccdb69c1e91bf"} Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.323714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerStarted","Data":"4227c1ef4517986db5b63f69f417525b1efc3dddfa056b58023dfaf2602681c9"} Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.326862 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.331189 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:15:51 crc kubenswrapper[4985]: W0128 18:15:51.362351 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf17410ee_fc07_4e6c_8262_d3dad9ca4a5d.slice/crio-2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d WatchSource:0}: Error finding container 2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d: Status 404 returned error can't find the container with id 2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.373088 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.614488 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"] Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.625595 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:51 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:51 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:51 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.625662 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:51 crc kubenswrapper[4985]: W0128 18:15:51.662831 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478dee72_717a_448e_b14d_15d600c82eb5.slice/crio-687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f WatchSource:0}: Error finding container 687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f: Status 404 returned error can't find the container with id 687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f Jan 28 18:15:51 crc kubenswrapper[4985]: I0128 18:15:51.927554 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.334421 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a7c01a9f-20e3-411e-b7da-d21be45aba82","Type":"ContainerStarted","Data":"a12184f6c2a48cfdc9dbfa4c6e29637c2b0a033211e9e57f5e3cd9fc0e34bfa4"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.339375 4985 generic.go:334] "Generic (PLEG): container finished" podID="478dee72-717a-448e-b14d-15d600c82eb5" containerID="5673793a26abba26b8f6d32fd5a5358bd49bc89bef0867e3813c049e8ce5af23" exitCode=0 Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.339497 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"5673793a26abba26b8f6d32fd5a5358bd49bc89bef0867e3813c049e8ce5af23"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.339637 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerStarted","Data":"687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.344794 4985 generic.go:334] "Generic (PLEG): container finished" podID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerID="232f8967da98b027f9bf4b5329e389ea4efabb6b13f4e9043541624ffe8ba02b" exitCode=0 Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.346442 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" 
event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"232f8967da98b027f9bf4b5329e389ea4efabb6b13f4e9043541624ffe8ba02b"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.346583 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerStarted","Data":"2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.353120 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerStarted","Data":"2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829"} Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.353562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.397714 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" podStartSLOduration=132.397667227 podStartE2EDuration="2m12.397667227s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:15:52.390680989 +0000 UTC m=+163.217243810" watchObservedRunningTime="2026-01-28 18:15:52.397667227 +0000 UTC m=+163.224230048" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.610628 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-fn9d5" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.618799 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:52 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:52 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:52 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.618869 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.823370 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.951456 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") pod \"5593b8be-de94-4ed3-81cb-449457767772\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.951721 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") pod \"5593b8be-de94-4ed3-81cb-449457767772\" (UID: \"5593b8be-de94-4ed3-81cb-449457767772\") " Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.952223 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "5593b8be-de94-4ed3-81cb-449457767772" (UID: "5593b8be-de94-4ed3-81cb-449457767772"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:15:52 crc kubenswrapper[4985]: I0128 18:15:52.977941 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "5593b8be-de94-4ed3-81cb-449457767772" (UID: "5593b8be-de94-4ed3-81cb-449457767772"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.053781 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5593b8be-de94-4ed3-81cb-449457767772-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.053837 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5593b8be-de94-4ed3-81cb-449457767772-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.404172 4985 generic.go:334] "Generic (PLEG): container finished" podID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerID="c0b6373de32d25637f399a6feae262091a19d13a816cfb3455bbb1c28479e246" exitCode=0 Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.404284 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a7c01a9f-20e3-411e-b7da-d21be45aba82","Type":"ContainerDied","Data":"c0b6373de32d25637f399a6feae262091a19d13a816cfb3455bbb1c28479e246"} Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.445061 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"5593b8be-de94-4ed3-81cb-449457767772","Type":"ContainerDied","Data":"03a87c1436d3238d93dfc27faef0f425b055e15e52fd95499db1893c39fae51c"} Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.445119 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03a87c1436d3238d93dfc27faef0f425b055e15e52fd95499db1893c39fae51c" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.445236 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.451106 4985 generic.go:334] "Generic (PLEG): container finished" podID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerID="437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c" exitCode=0 Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.451866 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerDied","Data":"437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c"} Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.640787 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:53 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:53 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:53 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:53 crc kubenswrapper[4985]: I0128 18:15:53.640859 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.242123 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.248217 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.617770 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:54 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:54 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:54 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.617871 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.837504 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.892949 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") pod \"a7c01a9f-20e3-411e-b7da-d21be45aba82\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.893074 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a7c01a9f-20e3-411e-b7da-d21be45aba82" (UID: "a7c01a9f-20e3-411e-b7da-d21be45aba82"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.893196 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") pod \"a7c01a9f-20e3-411e-b7da-d21be45aba82\" (UID: \"a7c01a9f-20e3-411e-b7da-d21be45aba82\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.893567 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a7c01a9f-20e3-411e-b7da-d21be45aba82-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.902455 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a7c01a9f-20e3-411e-b7da-d21be45aba82" (UID: "a7c01a9f-20e3-411e-b7da-d21be45aba82"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.904745 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.994502 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") pod \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.995297 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") pod \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.995432 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") pod \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\" (UID: \"1030ed14-9fc1-4ec9-a93c-13eab69320ae\") " Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.995865 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a7c01a9f-20e3-411e-b7da-d21be45aba82-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:54 crc kubenswrapper[4985]: I0128 18:15:54.996445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "1030ed14-9fc1-4ec9-a93c-13eab69320ae" (UID: "1030ed14-9fc1-4ec9-a93c-13eab69320ae"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.001456 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88" (OuterVolumeSpecName: "kube-api-access-p2d88") pod "1030ed14-9fc1-4ec9-a93c-13eab69320ae" (UID: "1030ed14-9fc1-4ec9-a93c-13eab69320ae"). InnerVolumeSpecName "kube-api-access-p2d88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.001819 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1030ed14-9fc1-4ec9-a93c-13eab69320ae" (UID: "1030ed14-9fc1-4ec9-a93c-13eab69320ae"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.096997 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2d88\" (UniqueName: \"kubernetes.io/projected/1030ed14-9fc1-4ec9-a93c-13eab69320ae-kube-api-access-p2d88\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.097038 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1030ed14-9fc1-4ec9-a93c-13eab69320ae-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.097048 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1030ed14-9fc1-4ec9-a93c-13eab69320ae-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.525612 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"a7c01a9f-20e3-411e-b7da-d21be45aba82","Type":"ContainerDied","Data":"a12184f6c2a48cfdc9dbfa4c6e29637c2b0a033211e9e57f5e3cd9fc0e34bfa4"} Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.526510 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.526891 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a12184f6c2a48cfdc9dbfa4c6e29637c2b0a033211e9e57f5e3cd9fc0e34bfa4" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.535587 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" event={"ID":"1030ed14-9fc1-4ec9-a93c-13eab69320ae","Type":"ContainerDied","Data":"8f93ab89ce3c6adab00c97ddb3618e2ccd297812e80918e595461de298f590fd"} Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.535652 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.535666 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f93ab89ce3c6adab00c97ddb3618e2ccd297812e80918e595461de298f590fd" Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.618946 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:55 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:55 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:55 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:55 crc kubenswrapper[4985]: I0128 18:15:55.619114 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:56 crc kubenswrapper[4985]: I0128 18:15:56.620287 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:56 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:56 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:56 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:56 crc kubenswrapper[4985]: I0128 18:15:56.620373 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:57 crc kubenswrapper[4985]: I0128 18:15:57.619007 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:57 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:57 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:57 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:57 crc kubenswrapper[4985]: I0128 18:15:57.619419 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:58 crc kubenswrapper[4985]: I0128 18:15:58.618935 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:58 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:58 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:58 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:58 crc kubenswrapper[4985]: I0128 18:15:58.619025 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" 
containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.217882 4985 patch_prober.go:28] interesting pod/console-f9d7485db-b5t5k container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.217966 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417667 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417761 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417666 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.417829 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.657656 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 18:15:59 crc kubenswrapper[4985]: [-]has-synced failed: reason withheld Jan 28 18:15:59 crc kubenswrapper[4985]: [+]process-running ok Jan 28 18:15:59 crc kubenswrapper[4985]: healthz check failed Jan 28 18:15:59 crc kubenswrapper[4985]: I0128 18:15:59.657738 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 18:16:00 crc kubenswrapper[4985]: I0128 18:16:00.618596 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:16:00 crc kubenswrapper[4985]: I0128 18:16:00.621235 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 18:16:02 crc kubenswrapper[4985]: I0128 18:16:02.758034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" 
(UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:16:02 crc kubenswrapper[4985]: I0128 18:16:02.764807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0-metrics-certs\") pod \"network-metrics-daemon-hrd6k\" (UID: \"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0\") " pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:16:02 crc kubenswrapper[4985]: I0128 18:16:02.985545 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-hrd6k" Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.580802 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.581051 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" containerID="cri-o://c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76" gracePeriod=30 Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.593288 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:16:05 crc kubenswrapper[4985]: I0128 18:16:05.593531 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" containerID="cri-o://d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548" gracePeriod=30 Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.653658 4985 generic.go:334] "Generic (PLEG): container finished" podID="81ef78af-dc11-4231-9693-eb088718d103" containerID="c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76" exitCode=0 Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.653775 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerDied","Data":"c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76"} Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.656046 4985 generic.go:334] "Generic (PLEG): container finished" podID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerID="d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548" exitCode=0 Jan 28 18:16:06 crc kubenswrapper[4985]: I0128 18:16:06.656085 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerDied","Data":"d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548"} Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.228406 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.235520 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.418654 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.418749 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.420120 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.420197 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.420270 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421068 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421148 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421328 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b"} pod="openshift-console/downloads-7954f5f757-hpz9q" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.421454 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" containerID="cri-o://996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b" gracePeriod=2 Jan 28 18:16:09 crc kubenswrapper[4985]: I0128 18:16:09.975799 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:09 crc 
kubenswrapper[4985]: I0128 18:16:09.975884 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.315826 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.315980 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.683164 4985 generic.go:334] "Generic (PLEG): container finished" podID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerID="996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b" exitCode=0 Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.683226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerDied","Data":"996f5a4f85f66ed4a659b1f3b977d305f1391958d42cde202ba973eed4ede77b"} Jan 28 18:16:10 crc kubenswrapper[4985]: I0128 18:16:10.692562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:16:11 crc kubenswrapper[4985]: I0128 18:16:11.189539 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:16:11 crc kubenswrapper[4985]: I0128 18:16:11.189897 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:16:19 crc kubenswrapper[4985]: I0128 18:16:19.419190 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:19 crc kubenswrapper[4985]: I0128 18:16:19.420060 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:19 crc 
kubenswrapper[4985]: I0128 18:16:19.976721 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:19 crc kubenswrapper[4985]: I0128 18:16:19.977232 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:20 crc kubenswrapper[4985]: I0128 18:16:20.315547 4985 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-52cvd container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:20 crc kubenswrapper[4985]: I0128 18:16:20.315655 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:20 crc kubenswrapper[4985]: I0128 18:16:20.556058 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" Jan 28 18:16:22 crc kubenswrapper[4985]: I0128 18:16:22.498503 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 18:16:24 crc kubenswrapper[4985]: E0128 18:16:24.213337 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 18:16:24 crc kubenswrapper[4985]: E0128 18:16:24.213878 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rzrfx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-nbllw_openshift-marketplace(b3c2ecc0-c6a6-468b-bdcf-e84c2831a580): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:24 crc kubenswrapper[4985]: E0128 18:16:24.215111 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.367869 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.368541 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerName="collect-profiles" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368563 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerName="collect-profiles" Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.368584 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5593b8be-de94-4ed3-81cb-449457767772" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368598 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="5593b8be-de94-4ed3-81cb-449457767772" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.368625 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368638 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.368852 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="5593b8be-de94-4ed3-81cb-449457767772" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 
18:16:28.368888 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7c01a9f-20e3-411e-b7da-d21be45aba82" containerName="pruner" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.370149 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" containerName="collect-profiles" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.371104 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.379324 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.381188 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.381575 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.399887 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.467799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.467891 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.471543 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501062 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:16:28 crc kubenswrapper[4985]: E0128 18:16:28.501370 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501385 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501504 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="81ef78af-dc11-4231-9693-eb088718d103" containerName="controller-manager" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.501893 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.518450 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.569619 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.569998 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570200 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570543 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") pod \"81ef78af-dc11-4231-9693-eb088718d103\" (UID: \"81ef78af-dc11-4231-9693-eb088718d103\") " Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.570864 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca" (OuterVolumeSpecName: "client-ca") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571105 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config" (OuterVolumeSpecName: "config") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571656 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571828 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571840 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-client-ca\") on node \"crc\" DevicePath \"\"" 
Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571854 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/81ef78af-dc11-4231-9693-eb088718d103-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.571904 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.587606 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.589766 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm" (OuterVolumeSpecName: "kube-api-access-rfnlm") pod "81ef78af-dc11-4231-9693-eb088718d103" (UID: "81ef78af-dc11-4231-9693-eb088718d103"). InnerVolumeSpecName "kube-api-access-rfnlm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.589871 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.673729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " 
pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674706 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674877 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rfnlm\" (UniqueName: \"kubernetes.io/projected/81ef78af-dc11-4231-9693-eb088718d103-kube-api-access-rfnlm\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.674992 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/81ef78af-dc11-4231-9693-eb088718d103-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.675315 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.677642 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.680236 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.693224 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.767240 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.806281 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" event={"ID":"81ef78af-dc11-4231-9693-eb088718d103","Type":"ContainerDied","Data":"6aa4b8f2068d7c22817241bf474ef76faf5c50ef5705a0334899bfa519f7cac2"} Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.806364 4985 scope.go:117] "RemoveContainer" containerID="c6ab429d720c37e702d53f4e9a0f44ef39cfc027fff063215df4736dace96d76" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.806378 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-52cvd" Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.852081 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:16:28 crc kubenswrapper[4985]: I0128 18:16:28.855030 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-52cvd"] Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.247015 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"controller-manager-5869bdf574-ch68d\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.276805 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81ef78af-dc11-4231-9693-eb088718d103" path="/var/lib/kubelet/pods/81ef78af-dc11-4231-9693-eb088718d103/volumes" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.417927 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.418026 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.441736 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.976289 4985 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-xqdzz container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:16:29 crc kubenswrapper[4985]: I0128 18:16:29.976383 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.161028 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.162982 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.209486 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.335558 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.335632 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.335654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437035 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437150 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437199 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437296 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.437413 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"installer-9-crc\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.458054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:33 crc kubenswrapper[4985]: I0128 18:16:33.514546 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.697217 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.740930 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:16:36 crc kubenswrapper[4985]: E0128 18:16:36.741239 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.741280 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.741433 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" containerName="route-controller-manager" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.742084 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.752557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.788147 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.788606 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.788943 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.789067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " 
pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.852117 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" event={"ID":"44d556c9-6c8e-45d3-bec8-303081e8c4e1","Type":"ContainerDied","Data":"0e823a46854aa252fe9015e01e9cddb6f75ae7ba4ce62f7d7338ee347ff378f1"} Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.852430 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.889829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.890473 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.890854 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891147 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") pod \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\" (UID: \"44d556c9-6c8e-45d3-bec8-303081e8c4e1\") " Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891615 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca" (OuterVolumeSpecName: "client-ca") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891645 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891835 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.891952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.892002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.892077 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config" (OuterVolumeSpecName: "config") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.892426 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.894275 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.895117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.897774 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.897948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q" (OuterVolumeSpecName: "kube-api-access-d6t9q") pod "44d556c9-6c8e-45d3-bec8-303081e8c4e1" (UID: "44d556c9-6c8e-45d3-bec8-303081e8c4e1"). InnerVolumeSpecName "kube-api-access-d6t9q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.898063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.918119 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"route-controller-manager-76d5df6584-ppscc\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.994344 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44d556c9-6c8e-45d3-bec8-303081e8c4e1-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.994388 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6t9q\" (UniqueName: \"kubernetes.io/projected/44d556c9-6c8e-45d3-bec8-303081e8c4e1-kube-api-access-d6t9q\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:36 crc kubenswrapper[4985]: I0128 18:16:36.994402 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44d556c9-6c8e-45d3-bec8-303081e8c4e1-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.079241 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.192573 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.198224 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-xqdzz"] Jan 28 18:16:37 crc kubenswrapper[4985]: I0128 18:16:37.272766 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44d556c9-6c8e-45d3-bec8-303081e8c4e1" path="/var/lib/kubelet/pods/44d556c9-6c8e-45d3-bec8-303081e8c4e1/volumes" Jan 28 18:16:37 crc kubenswrapper[4985]: E0128 18:16:37.619588 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 18:16:37 crc kubenswrapper[4985]: E0128 18:16:37.619801 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-99vxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-58qq5_openshift-marketplace(ee77ca55-8cd0-4401-afec-9817fee5f6bb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:37 crc kubenswrapper[4985]: E0128 18:16:37.620979 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-58qq5" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.537914 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-58qq5" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.658645 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.658891 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d86ls,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-vq448_openshift-marketplace(bebbf794-5459-4a75-bff1-92b7551d4784): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.660019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-vq448" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.685221 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.685469 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-89h9m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-mkflh_openshift-marketplace(d797afdd-19c6-45ed-81c8-5fa31175e121): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.686723 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-mkflh" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.805773 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.805957 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-glps2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ngcsk_openshift-marketplace(ff1a5336-5c99-49fa-bb89-311781866770): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:38 crc kubenswrapper[4985]: E0128 18:16:38.807525 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" Jan 28 18:16:39 crc kubenswrapper[4985]: I0128 18:16:39.419297 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:39 crc kubenswrapper[4985]: I0128 18:16:39.419665 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.185644 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.185726 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.185794 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.186717 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.186812 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa" gracePeriod=600 Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.882349 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa" exitCode=0 Jan 28 18:16:41 crc kubenswrapper[4985]: I0128 18:16:41.882438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa"} Jan 28 18:16:42 crc kubenswrapper[4985]: E0128 18:16:42.743376 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-mkflh" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" Jan 28 18:16:42 crc kubenswrapper[4985]: E0128 18:16:42.743395 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" Jan 28 18:16:42 crc kubenswrapper[4985]: E0128 18:16:42.743622 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-vq448" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" Jan 28 18:16:42 crc kubenswrapper[4985]: I0128 18:16:42.808185 4985 scope.go:117] "RemoveContainer" containerID="d7be33ff5b68db551839a7b0619faeeabeb41fe748eb7a18f2e5916375270548" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.050412 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.051403 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kj4fx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-tkbjb_openshift-marketplace(4bec6c8f-9678-463c-9e09-5b8e362f2f1b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.052601 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-tkbjb" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.068231 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.068423 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wpdsv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-2zfzc_openshift-marketplace(478dee72-717a-448e-b14d-15d600c82eb5): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.069725 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-2zfzc" podUID="478dee72-717a-448e-b14d-15d600c82eb5" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.071528 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.071637 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gn4jc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-zcwgk_openshift-marketplace(f17410ee-fc07-4e6c-8262-d3dad9ca4a5d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.072954 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.257386 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 18:16:43 crc kubenswrapper[4985]: W0128 18:16:43.273442 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod490ef8c2_c2f7_4661_9016_d6bbadb543ff.slice/crio-cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0 WatchSource:0}: Error finding container cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0: Status 404 returned error can't find the container with id cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0 Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.333338 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.343217 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.345089 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-hrd6k"] Jan 28 18:16:43 crc kubenswrapper[4985]: W0128 18:16:43.374170 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc9a55227_f583_4f77_845f_9938b41aad05.slice/crio-0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5 WatchSource:0}: Error finding container 
0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5: Status 404 returned error can't find the container with id 0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5 Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.433123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.918612 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerStarted","Data":"ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.920470 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerStarted","Data":"a8bc81de07eb444f8f7f3c331821e8845288261d63d60d28d416c8c297b87410"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.920532 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerStarted","Data":"cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.921830 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerStarted","Data":"230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.921859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerStarted","Data":"0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.922133 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930081 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-hpz9q" event={"ID":"25061ce4-ca31-4da7-ad36-c6535e1d2028","Type":"ContainerStarted","Data":"27a6a768d0f7cda3a9be6469f427962f23d0f54576c2de064e4cfba387aa0006"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930300 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930722 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.930789 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.932185 4985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerStarted","Data":"aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.932212 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerStarted","Data":"ad2adfb876654b6fefd1ea75de1738cfc3935a2a867a3438609617e943e0d7b9"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.933218 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.934434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" event={"ID":"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0","Type":"ContainerStarted","Data":"3bcc15c49ad319492bfc3a7313c76d11980f9fb5262fe5586f8704dea7732913"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.934461 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" event={"ID":"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0","Type":"ContainerStarted","Data":"a75d2e51bc33c85d8fb48bc8f8ff0c7277c0877f520a52b18651a6d98a4378c5"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.938498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.940385 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerStarted","Data":"3c8ef3ffe3a3beb101ee44bb4477a152e2c2c1d60d8d32877bb5661a8b94361c"} Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.940454 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerStarted","Data":"f249e6a9045822ac8356aabfe2373c714fcb3fec9f0635e367520cd44059c81b"} Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.941724 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.943369 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-tkbjb" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" Jan 28 18:16:43 crc kubenswrapper[4985]: E0128 18:16:43.943784 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-2zfzc" 
podUID="478dee72-717a-448e-b14d-15d600c82eb5" Jan 28 18:16:43 crc kubenswrapper[4985]: I0128 18:16:43.946707 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.013214 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" podStartSLOduration=19.0131859 podStartE2EDuration="19.0131859s" podCreationTimestamp="2026-01-28 18:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:43.983048008 +0000 UTC m=+214.809610829" watchObservedRunningTime="2026-01-28 18:16:44.0131859 +0000 UTC m=+214.839748721" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.112512 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=16.112487 podStartE2EDuration="16.112487s" podCreationTimestamp="2026-01-28 18:16:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:44.078987372 +0000 UTC m=+214.905550193" watchObservedRunningTime="2026-01-28 18:16:44.112487 +0000 UTC m=+214.939049821" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.112759 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" podStartSLOduration=19.112755467 podStartE2EDuration="19.112755467s" podCreationTimestamp="2026-01-28 18:16:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:44.109774572 +0000 UTC m=+214.936337393" watchObservedRunningTime="2026-01-28 18:16:44.112755467 +0000 UTC m=+214.939318288" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.202452 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=11.202428792 podStartE2EDuration="11.202428792s" podCreationTimestamp="2026-01-28 18:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:44.170877849 +0000 UTC m=+214.997440690" watchObservedRunningTime="2026-01-28 18:16:44.202428792 +0000 UTC m=+215.028991623" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.241544 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.951779 4985 generic.go:334] "Generic (PLEG): container finished" podID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerID="ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382" exitCode=0 Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.951860 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382"} Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.957010 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerID="a8bc81de07eb444f8f7f3c331821e8845288261d63d60d28d416c8c297b87410" exitCode=0 Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.957188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerDied","Data":"a8bc81de07eb444f8f7f3c331821e8845288261d63d60d28d416c8c297b87410"} Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.959352 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-hrd6k" event={"ID":"e38e95b7-0bf3-4fe9-b0c8-ea348ebb83d0","Type":"ContainerStarted","Data":"c49fe4bca42d080f2e058ce4f25686140f849c2dbe753d51cc784e4e644223a4"} Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.960299 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" start-of-body= Jan 28 18:16:44 crc kubenswrapper[4985]: I0128 18:16:44.960391 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": dial tcp 10.217.0.8:8080: connect: connection refused" Jan 28 18:16:45 crc kubenswrapper[4985]: I0128 18:16:45.970804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerStarted","Data":"30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45"} Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.305785 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.328490 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-hrd6k" podStartSLOduration=186.328459675 podStartE2EDuration="3m6.328459675s" podCreationTimestamp="2026-01-28 18:13:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:16:45.01547413 +0000 UTC m=+215.842036971" watchObservedRunningTime="2026-01-28 18:16:46.328459675 +0000 UTC m=+217.155022496" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.470791 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") pod \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.470922 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") pod \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\" (UID: \"490ef8c2-c2f7-4661-9016-d6bbadb543ff\") " Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.471084 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "490ef8c2-c2f7-4661-9016-d6bbadb543ff" (UID: "490ef8c2-c2f7-4661-9016-d6bbadb543ff"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.471676 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.481447 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "490ef8c2-c2f7-4661-9016-d6bbadb543ff" (UID: "490ef8c2-c2f7-4661-9016-d6bbadb543ff"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.573767 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/490ef8c2-c2f7-4661-9016-d6bbadb543ff-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.977895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"490ef8c2-c2f7-4661-9016-d6bbadb543ff","Type":"ContainerDied","Data":"cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0"} Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.978341 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cabe12c11673a1180890f6f0d6d87300c980b07016e69a08e8dbb956bdd4b0b0" Jan 28 18:16:46 crc kubenswrapper[4985]: I0128 18:16:46.977957 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 18:16:47 crc kubenswrapper[4985]: I0128 18:16:47.006985 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-nbllw" podStartSLOduration=4.813574879 podStartE2EDuration="1m0.006961865s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:50.219350054 +0000 UTC m=+161.045912875" lastFinishedPulling="2026-01-28 18:16:45.41273704 +0000 UTC m=+216.239299861" observedRunningTime="2026-01-28 18:16:47.001857819 +0000 UTC m=+217.828420650" watchObservedRunningTime="2026-01-28 18:16:47.006961865 +0000 UTC m=+217.833524676" Jan 28 18:16:47 crc kubenswrapper[4985]: I0128 18:16:47.858473 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:16:47 crc kubenswrapper[4985]: I0128 18:16:47.858897 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:16:49 crc kubenswrapper[4985]: I0128 18:16:49.433496 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-hpz9q" Jan 28 18:16:49 crc kubenswrapper[4985]: I0128 18:16:49.449559 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server" probeResult="failure" output=< Jan 28 18:16:49 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:16:49 crc kubenswrapper[4985]: > Jan 28 18:16:58 crc kubenswrapper[4985]: I0128 18:16:58.096587 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:16:58 crc kubenswrapper[4985]: I0128 18:16:58.144056 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.069385 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerStarted","Data":"c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.071193 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff1a5336-5c99-49fa-bb89-311781866770" containerID="3b65c4cdfefa99481aa1051361932ec6ad9c250e75289c86b535f66431840968" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.071240 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"3b65c4cdfefa99481aa1051361932ec6ad9c250e75289c86b535f66431840968"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.074820 4985 generic.go:334] "Generic (PLEG): container finished" podID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerID="08c2afc11e237eab84a8f7dfaa5b0598297222c01564bf4921e004a1b405af84" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.075078 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"08c2afc11e237eab84a8f7dfaa5b0598297222c01564bf4921e004a1b405af84"} 
Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.081693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerStarted","Data":"82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.083960 4985 generic.go:334] "Generic (PLEG): container finished" podID="bebbf794-5459-4a75-bff1-92b7551d4784" containerID="c3c7c834b59dec9afe12ae5cb4e24ce5d7fb7d283ff22d3d168e71ce368d578d" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.084010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"c3c7c834b59dec9afe12ae5cb4e24ce5d7fb7d283ff22d3d168e71ce368d578d"} Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.086203 4985 generic.go:334] "Generic (PLEG): container finished" podID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerID="5ae5d10976e7c26eb6213f430d17c638f8547abe24f44e7063a7dba954835ef4" exitCode=0 Jan 28 18:17:02 crc kubenswrapper[4985]: I0128 18:17:02.086264 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"5ae5d10976e7c26eb6213f430d17c638f8547abe24f44e7063a7dba954835ef4"} Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.093860 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerID="f66d90e90c24d7eaca4eeddb8684aee625dffff1f85b1b4fa72af4b5c206bbee" exitCode=0 Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.094343 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"f66d90e90c24d7eaca4eeddb8684aee625dffff1f85b1b4fa72af4b5c206bbee"} Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.097116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b"} Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.096747 4985 generic.go:334] "Generic (PLEG): container finished" podID="478dee72-717a-448e-b14d-15d600c82eb5" containerID="c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b" exitCode=0 Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.103073 4985 generic.go:334] "Generic (PLEG): container finished" podID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerID="82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c" exitCode=0 Jan 28 18:17:03 crc kubenswrapper[4985]: I0128 18:17:03.103106 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c"} Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.514151 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.514797 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" containerID="cri-o://aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce" gracePeriod=30 Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.611686 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:17:05 crc kubenswrapper[4985]: I0128 18:17:05.611975 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" containerID="cri-o://230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea" gracePeriod=30 Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.237762 4985 generic.go:334] "Generic (PLEG): container finished" podID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerID="aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce" exitCode=0 Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.237898 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerDied","Data":"aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce"} Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.239874 4985 generic.go:334] "Generic (PLEG): container finished" podID="c9a55227-f583-4f77-845f-9938b41aad05" containerID="230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea" exitCode=0 Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.239958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerDied","Data":"230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea"} Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.242458 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerStarted","Data":"d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82"} Jan 28 18:17:06 crc kubenswrapper[4985]: I0128 18:17:06.274864 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ngcsk" podStartSLOduration=5.266846474 podStartE2EDuration="1m19.274841308s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:50.308294941 +0000 UTC m=+161.134857762" lastFinishedPulling="2026-01-28 18:17:04.316289765 +0000 UTC m=+235.142852596" observedRunningTime="2026-01-28 18:17:06.272227254 +0000 UTC m=+237.098790085" watchObservedRunningTime="2026-01-28 18:17:06.274841308 +0000 UTC m=+237.101404139" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.080774 4985 patch_prober.go:28] interesting pod/route-controller-manager-76d5df6584-ppscc container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.081181 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.365338 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.399266 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:17:07 crc kubenswrapper[4985]: E0128 18:17:07.399856 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.399974 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" Jan 28 18:17:07 crc kubenswrapper[4985]: E0128 18:17:07.400066 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerName="pruner" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.400146 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerName="pruner" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.400354 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c9a55227-f583-4f77-845f-9938b41aad05" containerName="route-controller-manager" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.400445 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="490ef8c2-c2f7-4661-9016-d6bbadb543ff" containerName="pruner" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.401113 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.406929 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482233 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482513 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482577 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") pod \"c9a55227-f583-4f77-845f-9938b41aad05\" (UID: \"c9a55227-f583-4f77-845f-9938b41aad05\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482754 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.482890 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.483045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: 
I0128 18:17:07.483614 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca" (OuterVolumeSpecName: "client-ca") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.484106 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config" (OuterVolumeSpecName: "config") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.492915 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.493500 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb" (OuterVolumeSpecName: "kube-api-access-gfgxb") pod "c9a55227-f583-4f77-845f-9938b41aad05" (UID: "c9a55227-f583-4f77-845f-9938b41aad05"). InnerVolumeSpecName "kube-api-access-gfgxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584604 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584777 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584922 4985 reconciler_common.go:293] "Volume detached for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584937 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9a55227-f583-4f77-845f-9938b41aad05-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584949 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfgxb\" (UniqueName: \"kubernetes.io/projected/c9a55227-f583-4f77-845f-9938b41aad05-kube-api-access-gfgxb\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.584989 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9a55227-f583-4f77-845f-9938b41aad05-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.586534 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.586957 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.590066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.604702 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"route-controller-manager-5746676d8-2r8p5\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.721290 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.890922 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.988871 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989057 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989091 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989120 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") pod \"c548c555-f5c2-4b49-83f4-ba501eb53a19\" (UID: \"c548c555-f5c2-4b49-83f4-ba501eb53a19\") " Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.989980 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca" (OuterVolumeSpecName: "client-ca") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.990092 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.990287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config" (OuterVolumeSpecName: "config") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.994144 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442" (OuterVolumeSpecName: "kube-api-access-fw442") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "kube-api-access-fw442". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:17:07 crc kubenswrapper[4985]: I0128 18:17:07.994184 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c548c555-f5c2-4b49-83f4-ba501eb53a19" (UID: "c548c555-f5c2-4b49-83f4-ba501eb53a19"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.087151 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.087233 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.090760 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c548c555-f5c2-4b49-83f4-ba501eb53a19-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091317 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091336 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091350 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fw442\" (UniqueName: \"kubernetes.io/projected/c548c555-f5c2-4b49-83f4-ba501eb53a19-kube-api-access-fw442\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.091366 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c548c555-f5c2-4b49-83f4-ba501eb53a19-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.150857 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.255235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" event={"ID":"c9a55227-f583-4f77-845f-9938b41aad05","Type":"ContainerDied","Data":"0c1baf91463f290c3d892cb40e61b3d124856adc600caf5a5be88ecc069eded5"} Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.255323 4985 scope.go:117] "RemoveContainer" containerID="230a32e1704bbf1bfdb865092f83b3a4dcbb6f3d1684e2401748ed37926d4bea" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.255334 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.257368 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.257399 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5869bdf574-ch68d" event={"ID":"c548c555-f5c2-4b49-83f4-ba501eb53a19","Type":"ContainerDied","Data":"ad2adfb876654b6fefd1ea75de1738cfc3935a2a867a3438609617e943e0d7b9"} Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.289679 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.294765 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5869bdf574-ch68d"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.299577 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.302753 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-76d5df6584-ppscc"] Jan 28 18:17:08 crc kubenswrapper[4985]: I0128 18:17:08.506361 4985 scope.go:117] "RemoveContainer" containerID="aca47457e78cbdad7584b3f87da1ee68b51f7fcffc325c44756fb3b2a97df8ce" Jan 28 18:17:09 crc kubenswrapper[4985]: I0128 18:17:09.276097 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" path="/var/lib/kubelet/pods/c548c555-f5c2-4b49-83f4-ba501eb53a19/volumes" Jan 28 18:17:09 crc kubenswrapper[4985]: I0128 18:17:09.276685 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9a55227-f583-4f77-845f-9938b41aad05" path="/var/lib/kubelet/pods/c9a55227-f583-4f77-845f-9938b41aad05/volumes" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.018539 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:17:10 crc kubenswrapper[4985]: E0128 18:17:10.018835 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.018851 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.018978 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c548c555-f5c2-4b49-83f4-ba501eb53a19" containerName="controller-manager" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.019636 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.026305 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.026676 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.026988 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.027140 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.027306 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.027460 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.030936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.035787 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118769 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118840 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118891 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118944 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.118992 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219726 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219826 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219845 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.219875 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.290270 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.291137 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.295578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " 
pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.296591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.321589 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"controller-manager-7f8cf88bf9-bvxk6\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.349110 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:10 crc kubenswrapper[4985]: I0128 18:17:10.740284 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:17:12 crc kubenswrapper[4985]: I0128 18:17:12.283559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerStarted","Data":"61b704f839468f67ac0c3f15e67acd552ecf612f482f58ba44a89c002ae8c45b"} Jan 28 18:17:18 crc kubenswrapper[4985]: I0128 18:17:18.146702 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:18 crc kubenswrapper[4985]: I0128 18:17:18.210602 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk"] Jan 28 18:17:18 crc kubenswrapper[4985]: I0128 18:17:18.333880 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" containerID="cri-o://d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" gracePeriod=2 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.455805 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457092 4985 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457280 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457599 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457667 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457642 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457746 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.457765 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a" gracePeriod=15 Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.459980 4985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460362 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460399 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460420 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460435 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460464 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460476 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460494 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460506 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460519 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460531 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460549 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460561 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.460578 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460591 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460799 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460817 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460835 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460853 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460873 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.460888 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 18:17:21 crc kubenswrapper[4985]: E0128 18:17:21.461072 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.461086 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.461348 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.524168 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622221 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622285 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622326 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622470 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622505 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.622654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723582 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.723877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724283 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724528 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724733 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724287 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.724575 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.725102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:21 crc kubenswrapper[4985]: I0128 18:17:21.817496 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 18:17:22 crc kubenswrapper[4985]: I0128 18:17:22.413399 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 28 18:17:22 crc kubenswrapper[4985]: I0128 18:17:22.413475 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 28 18:17:24 crc kubenswrapper[4985]: E0128 18:17:24.473483 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:17:25 crc kubenswrapper[4985]: I0128 18:17:25.391100 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerStarted","Data":"01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58"} Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.166691 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.167311 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.168155 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.168663 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.169039 4985 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.169088 4985 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.169492 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="200ms" Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.370319 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="400ms" Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.400787 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.402515 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:17:26 crc kubenswrapper[4985]: I0128 18:17:26.403442 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a" exitCode=2 Jan 28 18:17:26 crc kubenswrapper[4985]: E0128 18:17:26.771592 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="800ms" Jan 28 18:17:27 crc kubenswrapper[4985]: I0128 18:17:27.410663 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ngcsk_ff1a5336-5c99-49fa-bb89-311781866770/registry-server/0.log" Jan 28 18:17:27 crc kubenswrapper[4985]: I0128 18:17:27.411630 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff1a5336-5c99-49fa-bb89-311781866770" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" exitCode=137 Jan 28 18:17:27 crc kubenswrapper[4985]: I0128 18:17:27.411680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82"} Jan 28 18:17:27 crc kubenswrapper[4985]: E0128 18:17:27.572055 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="1.6s" Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.087544 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container 
process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.088207 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.088735 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.088772 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.420931 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.423232 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.424225 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0" exitCode=0 Jan 28 18:17:28 crc kubenswrapper[4985]: I0128 18:17:28.424292 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6" exitCode=0 Jan 28 18:17:28 crc kubenswrapper[4985]: E0128 18:17:28.733162 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:17:29 crc 
kubenswrapper[4985]: E0128 18:17:29.173703 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="3.2s" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.437713 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.441883 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.443551 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861" exitCode=0 Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.443610 4985 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44" exitCode=0 Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.443653 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.449699 4985 generic.go:334] "Generic (PLEG): container finished" podID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerID="3c8ef3ffe3a3beb101ee44bb4477a152e2c2c1d60d8d32877bb5661a8b94361c" exitCode=0 Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.449930 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerDied","Data":"3c8ef3ffe3a3beb101ee44bb4477a152e2c2c1d60d8d32877bb5661a8b94361c"} Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.451029 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.451751 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.452193 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.452667 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:29 crc kubenswrapper[4985]: I0128 18:17:29.453163 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:31 crc kubenswrapper[4985]: I0128 18:17:31.269359 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:31 crc kubenswrapper[4985]: I0128 18:17:31.270001 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:31 crc kubenswrapper[4985]: I0128 18:17:31.270697 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:32 crc kubenswrapper[4985]: E0128 18:17:32.343118 4985 desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" volumeName="registry-storage" Jan 28 18:17:32 crc kubenswrapper[4985]: E0128 18:17:32.375969 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="6.4s" Jan 28 18:17:34 crc kubenswrapper[4985]: I0128 18:17:34.348007 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 28 18:17:34 crc kubenswrapper[4985]: I0128 18:17:34.348729 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.503450 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.503533 4985 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db" exitCode=1 Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.503590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db"} Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.504327 4985 scope.go:117] "RemoveContainer" containerID="e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.505071 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.505769 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.507677 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:36 crc kubenswrapper[4985]: I0128 18:17:36.508223 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.572746 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.667538 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.667629 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.730987 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.737374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 
18:17:37.738270 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.738681 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.739176 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:37 crc kubenswrapper[4985]: I0128 18:17:37.739649 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.088161 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.089160 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.090424 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.090536 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-ngcsk" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.588416 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.589368 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.590145 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.591280 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: I0128 18:17:38.592392 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:38 crc kubenswrapper[4985]: E0128 18:17:38.735283 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:38.777853 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="7s" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.089092 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.090090 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.090663 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.091393 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.091718 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115512 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") pod \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115590 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") pod \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115680 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") pod \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\" (UID: \"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock" (OuterVolumeSpecName: "var-lock") pod "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" (UID: "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.115936 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" (UID: "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.116375 4985 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.116399 4985 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.125311 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" (UID: "a97e98d6-b3fb-4d0b-a91e-00e4d18089c9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.217343 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a97e98d6-b3fb-4d0b-a91e-00e4d18089c9-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.273075 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.273835 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.274736 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.275378 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.545069 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"a97e98d6-b3fb-4d0b-a91e-00e4d18089c9","Type":"ContainerDied","Data":"f249e6a9045822ac8356aabfe2373c714fcb3fec9f0635e367520cd44059c81b"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.545136 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f249e6a9045822ac8356aabfe2373c714fcb3fec9f0635e367520cd44059c81b" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.545185 4985 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.552404 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.553112 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.553944 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:41.554571 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.070939 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.071775 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.072207 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.072701 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.073422 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.073602 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.073854 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.074671 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ngcsk_ff1a5336-5c99-49fa-bb89-311781866770/registry-server/0.log" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.075559 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.076269 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.076581 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.076998 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.077395 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.077630 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.077931 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.131835 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") pod \"ff1a5336-5c99-49fa-bb89-311781866770\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.131947 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.131992 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 18:17:55 crc 
kubenswrapper[4985]: I0128 18:17:42.132027 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132169 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") pod \"ff1a5336-5c99-49fa-bb89-311781866770\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132166 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132150 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132205 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") pod \"ff1a5336-5c99-49fa-bb89-311781866770\" (UID: \"ff1a5336-5c99-49fa-bb89-311781866770\") " Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132982 4985 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.133020 4985 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132992 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities" (OuterVolumeSpecName: "utilities") pod "ff1a5336-5c99-49fa-bb89-311781866770" (UID: "ff1a5336-5c99-49fa-bb89-311781866770"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.132123 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.137929 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2" (OuterVolumeSpecName: "kube-api-access-glps2") pod "ff1a5336-5c99-49fa-bb89-311781866770" (UID: "ff1a5336-5c99-49fa-bb89-311781866770"). InnerVolumeSpecName "kube-api-access-glps2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.234723 4985 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.234766 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.234786 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glps2\" (UniqueName: \"kubernetes.io/projected/ff1a5336-5c99-49fa-bb89-311781866770-kube-api-access-glps2\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.556654 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.557570 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.559193 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-ngcsk_ff1a5336-5c99-49fa-bb89-311781866770/registry-server/0.log" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.560134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ngcsk" event={"ID":"ff1a5336-5c99-49fa-bb89-311781866770","Type":"ContainerDied","Data":"443d55c2efdfe0f8e6f7fa0e88bf057b626e08f470a93af561b93e9387fb0988"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.560298 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ngcsk" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.561595 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.562196 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.562773 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.563388 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.566230 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.566977 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.575129 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.575508 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.575815 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.576136 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.576977 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:42.577387 4985 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:43.290967 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:44.347683 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.539491 4985 scope.go:117] "RemoveContainer" containerID="7eed0822087f3a62433dc217356d56168d324ce3fd135e1588dce79ff081e861" Jan 28 18:17:55 crc kubenswrapper[4985]: W0128 18:17:45.596073 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc WatchSource:0}: Error finding container da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc: Status 404 returned error can't find the container with id da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.600241 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.645057 4985 scope.go:117] "RemoveContainer" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:45.659609 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\": container with ID starting with 58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4 not found: ID does not exist" containerID="58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.659990 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4"} err="failed to get container status \"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\": rpc error: code = NotFound desc = could not find container \"58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4\": container with ID starting with 58d16ff1a3ed4df5b6d4043d24126ea9a5701f6b38c4660d31ceb38b0750b4f4 not found: ID does not exist" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.660029 4985 scope.go:117] "RemoveContainer" containerID="094c34dbabd2c2f0b72eed33002259925f33e02fce084d98d88878cf9019b9a0" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.734135 4985 scope.go:117] "RemoveContainer" containerID="270af1976a13be4c781a409dc2babf919ff4ae3da2ad19d8db3565ff272dd1c6" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.765499 4985 scope.go:117] "RemoveContainer" containerID="001328fc586387a939bdd32074008e40c044e49beba4fbca898eba5919cfbc3a" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:45.778754 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="7s" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.795010 4985 scope.go:117] "RemoveContainer" containerID="88fe0142f7b6babc60c91331c69d8d516ce31933818afe388be7aaba84f70b44" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.816598 4985 scope.go:117] "RemoveContainer" containerID="ec1464f43641435bb7f7730aeeb2e8cb094d428bd39d915dbfb60d58f7d36415" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.858343 4985 scope.go:117] "RemoveContainer" containerID="d4c7394b087a7cb74643734b40a07edfaed2e359b0d40d6e269819c6f1302e82" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.881874 4985 scope.go:117] "RemoveContainer" containerID="3b65c4cdfefa99481aa1051361932ec6ad9c250e75289c86b535f66431840968" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:45.897697 4985 scope.go:117] "RemoveContainer" containerID="081b66f566faa6677cfda3978e83d93b4dce7e5760fe6c65c107d2c177beeb71" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.221137 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ff1a5336-5c99-49fa-bb89-311781866770" (UID: "ff1a5336-5c99-49fa-bb89-311781866770"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.303604 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ff1a5336-5c99-49fa-bb89-311781866770-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.491902 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.492880 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.493465 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.493951 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.494494 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:46.619041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"da5c3365696303ccc352a5b0405df920571d579c6b1c1efd838229e335c6e2cc"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.263342 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.264950 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.265708 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.266149 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.266562 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.266950 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.286999 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.287037 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:47.287685 4985 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.288423 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: W0128 18:17:47.320349 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25 WatchSource:0}: Error finding container 7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25: Status 404 returned error can't find the container with id 7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25 Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.630990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerStarted","Data":"eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.633171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7ef78ab0fad28bb68e3f5443a429f16a3dd5218795b594c148cacaa1a2477f25"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.645524 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.645632 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.648022 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerStarted","Data":"31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.652006 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerStarted","Data":"3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.657388 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerStarted","Data":"98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.658913 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerStarted","Data":"84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.661859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerStarted","Data":"9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:47.663446 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f"} Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:48.736133 4985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/events\": dial tcp 38.102.83.195:6443: connect: connection refused" event="&Event{ObjectMeta:{certified-operators-58qq5.188ef7dfddb617e6 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-58qq5,UID:ee77ca55-8cd0-4401-afec-9817fee5f6bb,APIVersion:v1,ResourceVersion:28142,FieldPath:spec.containers{registry-server},},Reason:Created,Message:Created container registry-server,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,LastTimestamp:2026-01-28 18:17:24.472649702 +0000 UTC m=+255.299212553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.678856 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.679658 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.679867 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.679992 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680067 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680332 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc 
kubenswrapper[4985]: I0128 18:17:49.680574 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680810 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.680953 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681110 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681317 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681541 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681802 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.681970 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682112 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682281 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682453 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.682602 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.870657 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T18:17:49Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:2c1439ebdda893daf377def2d4397762658d82b531bb83f7ae41a4e7f26d4407\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:c044fa5dc076cb0fb053c5a676c39093e5fd06f6cc0eeaff8a747680c99c8b7f\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1675724519},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:40a0af9b58137c413272f3533763f7affd5db97e6ef410a6aeabce6d81a246ee\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:7e9b6f6bdbfa69f6106bc85eaee51d908ede4be851b578362af443af6bf732a8\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1202031349},{\\\"names
\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:364f5956de22b63db7dad4fcdd1f2740f71a482026c15aa3e2abebfbc5bf2fd7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:d3d262f90dd0f3c3f809b45f327ca086741a47f73e44560b04787609f0f99567\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1187310829},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:420326d8488ceff2cde22ad8b85d739b0c254d47e703f7ddb1f08f77a48816a6\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:54817da328fa589491a3acbe80acdd88c0830dcc63aaafc08c3539925a1a3b03\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1180692192},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.871406 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.872053 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.872639 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.873282 4985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:49.873320 4985 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.945509 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:49.945705 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.002938 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.003722 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.004371 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.004859 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.005359 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.005721 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.006197 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.006623 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.006870 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.007054 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.007240 4985 status_manager.go:851] "Failed to get status for pod" 
podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.007483 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.254218 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.254311 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.294588 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.295519 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.296193 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.296844 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.297196 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.297673 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.298116 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.298523 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.298891 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.299304 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.299662 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.300071 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.680023 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.680099 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.888332 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:50.888403 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 
18:17:51.270695 4985 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.271255 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.271628 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.272011 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.272879 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.273738 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274172 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274408 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274603 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" 
Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274676 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.274869 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.275089 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.275366 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.685739 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.685814 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.694176 4985 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="7fc250dcdccc741c807afcb3a8ac8715854616989d2d2a8934a498aee980197f" exitCode=0 Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.694378 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"7fc250dcdccc741c807afcb3a8ac8715854616989d2d2a8934a498aee980197f"} Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:51.951464 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server" probeResult="failure" output=< Jan 28 18:17:55 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:17:55 crc kubenswrapper[4985]: > Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:52.310660 4985 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-2zfzc" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server" probeResult="failure" output=< Jan 28 18:17:55 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:17:55 crc kubenswrapper[4985]: > Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:52.780181 4985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.195:6443: connect: connection refused" interval="7s" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.715247 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.715830 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.715985 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: E0128 18:17:54.716412 4985 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.716784 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.717336 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.717786 4985 status_manager.go:851] "Failed to get status for pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.718343 4985 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.718876 4985 status_manager.go:851] "Failed to get 
status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.719412 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.719859 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.720372 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.720869 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.721386 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:55 crc kubenswrapper[4985]: I0128 18:17:54.721828 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014386 4985 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 28 18:17:56 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e" Netns:"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:56 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:56 crc kubenswrapper[4985]: > Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014580 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 28 18:17:56 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e" Netns:"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:56 crc kubenswrapper[4985]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:56 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014609 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 28 18:17:56 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e" Netns:"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:56 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:56 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:56 crc kubenswrapper[4985]: E0128 18:17:56.014667 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e): error adding pod 
openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e\\\" Netns:\\\"/var/run/netns/4eb08ed8-3d76-4238-9b6a-71757c20ed1a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=6a4d7754e1f00e30d8ac0b3354013710342aa644194c9e2c94066df7ad6cfd2e;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s\\\": dial tcp 38.102.83.195:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" Jan 28 18:17:56 crc kubenswrapper[4985]: I0128 18:17:56.727643 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:56 crc kubenswrapper[4985]: I0128 18:17:56.728707 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403404 4985 log.go:32] "RunPodSandbox from runtime service failed" err=< Jan 28 18:17:57 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5" Netns:"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:57 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:57 crc kubenswrapper[4985]: > Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403821 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Jan 28 18:17:57 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5" Netns:"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: 
[openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:57 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:57 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403852 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Jan 28 18:17:57 crc kubenswrapper[4985]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5" Netns:"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s": dial tcp 38.102.83.195:6443: connect: connection refused Jan 28 18:17:57 crc kubenswrapper[4985]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Jan 28 18:17:57 crc kubenswrapper[4985]: > pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:17:57 crc kubenswrapper[4985]: E0128 18:17:57.403930 4985 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager(eefb5804-82d5-488f-a5c4-5473107ffbcd)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f8cf88bf9-bvxk6_openshift-controller-manager_eefb5804-82d5-488f-a5c4-5473107ffbcd_0(5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5): error adding pod openshift-controller-manager_controller-manager-7f8cf88bf9-bvxk6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5\\\" Netns:\\\"/var/run/netns/39cb782c-5ce0-470e-9072-793910fd8755\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f8cf88bf9-bvxk6;K8S_POD_INFRA_CONTAINER_ID=5d1aa8fddab71b6d48f1422f6742e58a618cd93ee3a83151f0cbf61509c37fd5;K8S_POD_UID=eefb5804-82d5-488f-a5c4-5473107ffbcd\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6] networking: Multus: [openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6/eefb5804-82d5-488f-a5c4-5473107ffbcd]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f8cf88bf9-bvxk6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f8cf88bf9-bvxk6?timeout=1m0s\\\": dial tcp 38.102.83.195:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.572155 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.731021 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.738503 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.739251 4985 status_manager.go:851] "Failed to get status for pod" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" pod="openshift-marketplace/redhat-marketplace-mkflh" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-mkflh\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.739752 4985 status_manager.go:851] "Failed to get status for pod" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" pod="openshift-marketplace/community-operators-tkbjb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-tkbjb\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.740146 4985 status_manager.go:851] "Failed to get status for pod" podUID="478dee72-717a-448e-b14d-15d600c82eb5" pod="openshift-marketplace/redhat-operators-2zfzc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-2zfzc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.740717 4985 status_manager.go:851] "Failed to get status for pod" podUID="ff1a5336-5c99-49fa-bb89-311781866770" pod="openshift-marketplace/certified-operators-ngcsk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-ngcsk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.741386 4985 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.741830 4985 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.742369 4985 status_manager.go:851] "Failed to get status for pod" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" pod="openshift-marketplace/redhat-operators-zcwgk" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-zcwgk\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.742822 4985 status_manager.go:851] "Failed to get status for pod" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5746676d8-2r8p5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.743200 4985 status_manager.go:851] "Failed to get status for pod" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" pod="openshift-marketplace/redhat-marketplace-vq448" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-vq448\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.743650 4985 status_manager.go:851] "Failed to get status for 
pod" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.744090 4985 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:57 crc kubenswrapper[4985]: I0128 18:17:57.744483 4985 status_manager.go:851] "Failed to get status for pod" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" pod="openshift-marketplace/certified-operators-58qq5" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/certified-operators-58qq5\": dial tcp 38.102.83.195:6443: connect: connection refused" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.353153 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.354205 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.409555 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.722977 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.723046 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.745290 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"eab48cfc75705407bcf2bbf163efe5df0cb78ef2f172e3537db0797494e3a428"} Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.754517 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 18:17:58 crc kubenswrapper[4985]: I0128 18:17:58.794433 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:17:59 crc kubenswrapper[4985]: I0128 18:17:59.994309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:18:00 crc kubenswrapper[4985]: I0128 18:18:00.307181 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:18:00 crc kubenswrapper[4985]: I0128 18:18:00.959143 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.026554 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.336233 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.381582 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:18:01 crc kubenswrapper[4985]: I0128 18:18:01.778194 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"df43313715c9b9250dd6b76cc9f81680195396e592f7b9beb1e364154316870d"} Jan 28 18:18:03 crc kubenswrapper[4985]: I0128 18:18:03.789482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"668a5a1fca394af9b85431e312e789be889070149007fbf6585536a96d26d7e3"} Jan 28 18:18:04 crc kubenswrapper[4985]: I0128 18:18:04.800104 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"867aef87f8404ba4d3244cbda663689a7da1991c53c5c338f80f4de59d8dd642"} Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.815365 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"98788e330c099ce5091b6f6069a917953b2497db56421633098d963cf693ce46"} Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.815902 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.815930 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.816404 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.829622 4985 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:05 crc kubenswrapper[4985]: I0128 18:18:05.839415 4985 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"message\\\":\\\"containers with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"message\\\":\\\"containers 
with unready status: [kube-apiserver kube-apiserver-check-endpoints]\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://eab48cfc75705407bcf2bbf163efe5df0cb78ef2f172e3537db0797494e3a428\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:17:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://668a5a1fca394af9b85431e312e789be889070149007fbf6585536a96d26d7e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://df43313715c9b9250dd6b76cc9f81680195396e592f7b9beb1e364154316870d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://98788e330c099ce5091b6f6069a917953b2497db56421633098d963cf693ce46\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://867aef87f8404ba4d3244cbda663689a7da1991c53c
5c338f80f4de59d8dd642\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T18:18:04Z\\\"}}}],\\\"phase\\\":\\\"Running\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": pods \"kube-apiserver-crc\" not found" Jan 28 18:18:06 crc kubenswrapper[4985]: I0128 18:18:06.834711 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:06 crc kubenswrapper[4985]: I0128 18:18:06.834766 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.289444 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.289729 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.295547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.299347 4985 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8947ea9f-4373-478d-b3c5-ea73f8a66c61" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.841408 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.841974 4985 generic.go:334] "Generic (PLEG): container finished" podID="ef543e1b-8068-4ea3-b32a-61027b32e95d" containerID="8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531" exitCode=1 Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842067 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerDied","Data":"8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531"} Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842568 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842599 4985 scope.go:117] "RemoveContainer" containerID="8af135d2d3fbdf0259b675305bb7932cd1e1f839412e4745be6961b692854531" Jan 28 18:18:07 crc kubenswrapper[4985]: I0128 18:18:07.842613 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.721980 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.722678 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.853712 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-vrzqb_ef543e1b-8068-4ea3-b32a-61027b32e95d/approver/0.log" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.857357 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.857651 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.857581 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"31af50e34fa620a5f81294ac0c220bee2c83cbdfd6c8e6b71423c865edabfac5"} Jan 28 18:18:08 crc kubenswrapper[4985]: I0128 18:18:08.865799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:09 crc kubenswrapper[4985]: I0128 18:18:09.864144 4985 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:09 crc kubenswrapper[4985]: I0128 18:18:09.864199 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="8d2cce12-d1b1-4c81-bbe0-b32a7ff2f6a5" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.263724 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.264678 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.701653 4985 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.874019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerStarted","Data":"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1"} Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.874506 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerStarted","Data":"5b05bb1b67bf56c71462a79b529ac2543e0047903c359f6e9fac94a35e5f7aac"} Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.874855 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.877005 4985 patch_prober.go:28] interesting pod/controller-manager-7f8cf88bf9-bvxk6 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" start-of-body= Jan 28 18:18:10 crc kubenswrapper[4985]: I0128 18:18:10.877095 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.60:8443/healthz\": dial tcp 10.217.0.60:8443: connect: connection refused" Jan 28 18:18:11 crc kubenswrapper[4985]: I0128 18:18:11.328417 4985 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="8947ea9f-4373-478d-b3c5-ea73f8a66c61" Jan 28 18:18:11 crc kubenswrapper[4985]: I0128 18:18:11.886986 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.803569 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35284->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.804536 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35284->10.217.0.58:8443: read: connection reset by peer" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.803659 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35282->10.217.0.58:8443: read: connection reset by peer" start-of-body=
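
The probe failures above are plain Go net/http client errors surfacing through the kubelet's prober: "Client.Timeout exceeded while awaiting headers" is what a request returns when http.Client.Timeout expires before response headers arrive, while "connection refused" and "connection reset by peer" pass through the underlying TCP errors. A minimal sketch of an HTTPS health check with the same failure modes; the endpoint is taken from the log, and the 1-second timeout and skipped certificate verification are assumptions for illustration.

    // probecheck.go - a minimal sketch of an HTTPS readiness check with a
    // client-side timeout, reproducing the failure modes in the records above.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            // When this fires before response headers arrive, the error reads
            // "net/http: request canceled while waiting for connection
            // (Client.Timeout exceeded while awaiting headers)".
            Timeout: 1 * time.Second, // assumed value, for illustration
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // assumption
            },
        }
        resp, err := client.Get("https://10.217.0.58:8443/healthz")
        if err != nil {
            fmt.Println("probe failure:", err) // timeout, refused, or reset by peer
            return
        }
        defer resp.Body.Close()
        fmt.Println("probe status:", resp.Status)
    }
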
\"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35282->10.217.0.58:8443: read: connection reset by peer" start-of-body= Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.804970 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": read tcp 10.217.0.2:35282->10.217.0.58:8443: read: connection reset by peer" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.927581 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5746676d8-2r8p5_e5f99d20-5afa-4144-b66e-9198c1d6c66d/route-controller-manager/0.log" Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.927644 4985 generic.go:334] "Generic (PLEG): container finished" podID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerID="84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f" exitCode=255 Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.927687 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerDied","Data":"84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f"} Jan 28 18:18:17 crc kubenswrapper[4985]: I0128 18:18:17.928313 4985 scope.go:117] "RemoveContainer" containerID="84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f" Jan 28 18:18:18 crc kubenswrapper[4985]: I0128 18:18:18.939171 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5746676d8-2r8p5_e5f99d20-5afa-4144-b66e-9198c1d6c66d/route-controller-manager/0.log" Jan 28 18:18:18 crc kubenswrapper[4985]: I0128 18:18:18.939686 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerStarted","Data":"c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca"} Jan 28 18:18:18 crc kubenswrapper[4985]: I0128 18:18:18.940405 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:18:19 crc kubenswrapper[4985]: I0128 18:18:19.940320 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:19 crc kubenswrapper[4985]: I0128 18:18:19.940413 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:20 crc kubenswrapper[4985]: I0128 18:18:20.947156 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:20 crc kubenswrapper[4985]: I0128 18:18:20.947233 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:28 crc kubenswrapper[4985]: I0128 18:18:28.722377 4985 patch_prober.go:28] interesting pod/route-controller-manager-5746676d8-2r8p5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 18:18:28 crc kubenswrapper[4985]: I0128 18:18:28.723525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.58:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.488547 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.727871 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:18:30 crc kubenswrapper[4985]: I0128 18:18:30.902834 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.235038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.293864 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 18:18:31 crc kubenswrapper[4985]: I0128 18:18:31.846963 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.146371 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.422265 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.651111 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.741840 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.855702 4985 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.960311 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 18:18:32 crc kubenswrapper[4985]: I0128 18:18:32.969626 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.207106 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.405857 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.674323 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.841894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.884558 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.942769 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.957664 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 28 18:18:33 crc kubenswrapper[4985]: I0128 18:18:33.980158 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.033070 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.054236 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.065391 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.225177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.275038 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.331936 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.786321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.876614 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 18:18:34 crc kubenswrapper[4985]: I0128 18:18:34.995348 4985 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.063169 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.234886 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.431843 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.736791 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.755971 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.775636 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.796193 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 18:18:35 crc kubenswrapper[4985]: I0128 18:18:35.860684 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.073216 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.216901 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.273339 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.481908 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.511699 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.515723 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.698479 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.856819 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 18:18:36 crc kubenswrapper[4985]: I0128 18:18:36.899484 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.062040 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.174047 4985 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.445041 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.527720 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 28 18:18:37 crc kubenswrapper[4985]: I0128 18:18:37.727613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.117430 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.180679 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.286975 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.542081 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.612817 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.729433 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 18:18:38 crc kubenswrapper[4985]: I0128 18:18:38.783535 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.060583 4985 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.263708 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.646376 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.684824 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 18:18:39 crc kubenswrapper[4985]: I0128 18:18:39.792674 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.334205 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.351439 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.410876 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.454487 4985 reflector.go:368] Caches 
Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.611398 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.708535 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 28 18:18:40 crc kubenswrapper[4985]: I0128 18:18:40.864496 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.058577 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.068860 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.117356 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.170941 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.188613 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.329496 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.384367 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.577110 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.579439 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 28 18:18:41 crc kubenswrapper[4985]: I0128 18:18:41.962355 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.341685 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.507380 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.761526 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.823341 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.834399 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.889034 4985 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 28 18:18:42 crc kubenswrapper[4985]: I0128 18:18:42.920234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.102584 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.145674 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.250564 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.257800 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.401726 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.517941 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.699239 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.730232 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.766439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 18:18:43 crc kubenswrapper[4985]: I0128 18:18:43.906628 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.337968 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.362800 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.376099 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.400096 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.488064 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.556719 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 18:18:44 crc kubenswrapper[4985]: I0128 18:18:44.563577 4985 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.056672 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.059139 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.119679 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.334038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.429422 4985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.431287 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zcwgk" podStartSLOduration=66.827691584 podStartE2EDuration="2m55.431243162s" podCreationTimestamp="2026-01-28 18:15:50 +0000 UTC" firstStartedPulling="2026-01-28 18:15:52.412135849 +0000 UTC m=+163.238698670" lastFinishedPulling="2026-01-28 18:17:41.015687417 +0000 UTC m=+271.842250248" observedRunningTime="2026-01-28 18:18:04.794441196 +0000 UTC m=+295.621004017" watchObservedRunningTime="2026-01-28 18:18:45.431243162 +0000 UTC m=+336.257805983" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.431427 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkbjb" podStartSLOduration=64.503243063 podStartE2EDuration="2m58.431422307s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:50.228891505 +0000 UTC m=+161.055454326" lastFinishedPulling="2026-01-28 18:17:44.157070719 +0000 UTC m=+274.983633570" observedRunningTime="2026-01-28 18:18:04.705890763 +0000 UTC m=+295.532453584" watchObservedRunningTime="2026-01-28 18:18:45.431422307 +0000 UTC m=+336.257985128" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.431893 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-58qq5" podStartSLOduration=99.114942799 podStartE2EDuration="2m58.43188673s" podCreationTimestamp="2026-01-28 18:15:47 +0000 UTC" firstStartedPulling="2026-01-28 18:15:49.189814787 +0000 UTC m=+160.016377608" lastFinishedPulling="2026-01-28 18:17:08.506758708 +0000 UTC m=+239.333321539" observedRunningTime="2026-01-28 18:18:04.670851672 +0000 UTC m=+295.497414493" watchObservedRunningTime="2026-01-28 18:18:45.43188673 +0000 UTC m=+336.258449551" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.433011 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2zfzc" podStartSLOduration=62.243034209 podStartE2EDuration="2m55.433004883s" podCreationTimestamp="2026-01-28 18:15:50 +0000 UTC" firstStartedPulling="2026-01-28 18:15:52.341707937 +0000 UTC m=+163.168270758" lastFinishedPulling="2026-01-28 18:17:45.531678581 +0000 UTC m=+276.358241432" observedRunningTime="2026-01-28 18:18:04.747572725 +0000 UTC m=+295.574135546" watchObservedRunningTime="2026-01-28 18:18:45.433004883 +0000 UTC m=+336.259567694" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.434207 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" 
podStartSLOduration=100.434198327 podStartE2EDuration="1m40.434198327s" podCreationTimestamp="2026-01-28 18:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:10.897382314 +0000 UTC m=+301.723945145" watchObservedRunningTime="2026-01-28 18:18:45.434198327 +0000 UTC m=+336.260761148" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.434902 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=84.434895427 podStartE2EDuration="1m24.434895427s" podCreationTimestamp="2026-01-28 18:17:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:04.760163458 +0000 UTC m=+295.586726289" watchObservedRunningTime="2026-01-28 18:18:45.434895427 +0000 UTC m=+336.261458248" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.435181 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mkflh" podStartSLOduration=97.429433825 podStartE2EDuration="2m56.435173375s" podCreationTimestamp="2026-01-28 18:15:49 +0000 UTC" firstStartedPulling="2026-01-28 18:15:51.322142723 +0000 UTC m=+162.148705544" lastFinishedPulling="2026-01-28 18:17:10.327882263 +0000 UTC m=+241.154445094" observedRunningTime="2026-01-28 18:18:04.689774458 +0000 UTC m=+295.516337279" watchObservedRunningTime="2026-01-28 18:18:45.435173375 +0000 UTC m=+336.261736196" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.435297 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-vq448" podStartSLOduration=62.793682934 podStartE2EDuration="2m56.435291869s" podCreationTimestamp="2026-01-28 18:15:49 +0000 UTC" firstStartedPulling="2026-01-28 18:15:51.335014809 +0000 UTC m=+162.161577630" lastFinishedPulling="2026-01-28 18:17:44.976623714 +0000 UTC m=+275.803186565" observedRunningTime="2026-01-28 18:18:04.825068979 +0000 UTC m=+295.651631810" watchObservedRunningTime="2026-01-28 18:18:45.435291869 +0000 UTC m=+336.261854710" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.435540 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podStartSLOduration=100.435535306 podStartE2EDuration="1m40.435535306s" podCreationTimestamp="2026-01-28 18:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:04.810352875 +0000 UTC m=+295.636915696" watchObservedRunningTime="2026-01-28 18:18:45.435535306 +0000 UTC m=+336.262098137" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436233 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ngcsk","openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436310 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436338 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436354 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-zcwgk","openshift-marketplace/community-operators-tkbjb","openshift-marketplace/certified-operators-58qq5","openshift-marketplace/community-operators-nbllw","openshift-marketplace/redhat-operators-2zfzc","openshift-marketplace/marketplace-operator-79b997595-b5wzm","openshift-marketplace/redhat-marketplace-mkflh","openshift-marketplace/redhat-marketplace-vq448"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436652 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-vq448" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server" containerID="cri-o://31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.436956 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-58qq5" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server" containerID="cri-o://01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.437437 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zcwgk" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server" containerID="cri-o://eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.437739 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator" containerID="cri-o://f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438461 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2zfzc" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server" containerID="cri-o://98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438738 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mkflh" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server" containerID="cri-o://9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438831 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-nbllw" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server" containerID="cri-o://30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.438439 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkbjb" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server" containerID="cri-o://3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.463667 4985 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.524989 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=40.524960982 podStartE2EDuration="40.524960982s" podCreationTimestamp="2026-01-28 18:18:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:45.521434331 +0000 UTC m=+336.347997142" watchObservedRunningTime="2026-01-28 18:18:45.524960982 +0000 UTC m=+336.351523823" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.580674 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.675382 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.675700 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager" containerID="cri-o://c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca" gracePeriod=30 Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.780430 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 18:18:45 crc kubenswrapper[4985]: I0128 18:18:45.823470 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.119729 4985 generic.go:334] "Generic (PLEG): container finished" podID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerID="30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.119942 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.122832 4985 generic.go:334] "Generic (PLEG): container finished" podID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerID="01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.122891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.126400 4985 generic.go:334] "Generic (PLEG): container finished" podID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerID="9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.126453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" 
event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.129031 4985 generic.go:334] "Generic (PLEG): container finished" podID="478dee72-717a-448e-b14d-15d600c82eb5" containerID="98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.129094 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131748 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-5746676d8-2r8p5_e5f99d20-5afa-4144-b66e-9198c1d6c66d/route-controller-manager/0.log" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131797 4985 generic.go:334] "Generic (PLEG): container finished" podID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerID="c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerDied","Data":"c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.131892 4985 scope.go:117] "RemoveContainer" containerID="84b3d1329602db518e01bb880483420a7b93445de8d4de35994516e44034e79f" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.135315 4985 generic.go:334] "Generic (PLEG): container finished" podID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerID="eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.135408 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.137738 4985 generic.go:334] "Generic (PLEG): container finished" podID="bebbf794-5459-4a75-bff1-92b7551d4784" containerID="31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.137801 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.139089 4985 generic.go:334] "Generic (PLEG): container finished" podID="7b3b0534-3356-446a-91e8-dae980c402db" containerID="f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.139158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerDied","Data":"f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6"} Jan 28 18:18:46 crc 
kubenswrapper[4985]: I0128 18:18:46.140983 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerID="3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68" exitCode=0 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.141234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68"} Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.141364 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager" containerID="cri-o://a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" gracePeriod=30 Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.318497 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.368207 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.390451 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.521857 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.521954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") pod \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.521985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522003 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522053 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9lxb\" (UniqueName: \"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") pod \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\" (UID: \"e5f99d20-5afa-4144-b66e-9198c1d6c66d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") pod \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") pod \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\" (UID: \"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.522843 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.523214 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities" (OuterVolumeSpecName: "utilities") pod "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" (UID: "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.523272 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca" (OuterVolumeSpecName: "client-ca") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.523343 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config" (OuterVolumeSpecName: "config") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.530967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx" (OuterVolumeSpecName: "kube-api-access-rzrfx") pod "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" (UID: "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580"). InnerVolumeSpecName "kube-api-access-rzrfx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.531030 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb" (OuterVolumeSpecName: "kube-api-access-q9lxb") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "kube-api-access-q9lxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.533786 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.534175 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e5f99d20-5afa-4144-b66e-9198c1d6c66d" (UID: "e5f99d20-5afa-4144-b66e-9198c1d6c66d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.548153 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.597274 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" (UID: "b3c2ecc0-c6a6-468b-bdcf-e84c2831a580"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623008 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") pod \"bebbf794-5459-4a75-bff1-92b7551d4784\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623106 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") pod \"bebbf794-5459-4a75-bff1-92b7551d4784\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623207 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") pod \"bebbf794-5459-4a75-bff1-92b7551d4784\" (UID: \"bebbf794-5459-4a75-bff1-92b7551d4784\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623542 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623560 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623573 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5f99d20-5afa-4144-b66e-9198c1d6c66d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623585 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzrfx\" (UniqueName: \"kubernetes.io/projected/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580-kube-api-access-rzrfx\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623596 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623608 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5f99d20-5afa-4144-b66e-9198c1d6c66d-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.623620 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9lxb\" (UniqueName: 
\"kubernetes.io/projected/e5f99d20-5afa-4144-b66e-9198c1d6c66d-kube-api-access-q9lxb\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.625007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities" (OuterVolumeSpecName: "utilities") pod "bebbf794-5459-4a75-bff1-92b7551d4784" (UID: "bebbf794-5459-4a75-bff1-92b7551d4784"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.627695 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls" (OuterVolumeSpecName: "kube-api-access-d86ls") pod "bebbf794-5459-4a75-bff1-92b7551d4784" (UID: "bebbf794-5459-4a75-bff1-92b7551d4784"). InnerVolumeSpecName "kube-api-access-d86ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.659769 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bebbf794-5459-4a75-bff1-92b7551d4784" (UID: "bebbf794-5459-4a75-bff1-92b7551d4784"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.663637 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.668903 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.672055 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.713902 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.715163 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.716328 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725162 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") pod \"d797afdd-19c6-45ed-81c8-5fa31175e121\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") pod \"d797afdd-19c6-45ed-81c8-5fa31175e121\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725428 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") pod \"d797afdd-19c6-45ed-81c8-5fa31175e121\" (UID: \"d797afdd-19c6-45ed-81c8-5fa31175e121\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725802 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725827 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bebbf794-5459-4a75-bff1-92b7551d4784-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.725838 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d86ls\" (UniqueName: \"kubernetes.io/projected/bebbf794-5459-4a75-bff1-92b7551d4784-kube-api-access-d86ls\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.728793 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities" (OuterVolumeSpecName: "utilities") pod "d797afdd-19c6-45ed-81c8-5fa31175e121" (UID: "d797afdd-19c6-45ed-81c8-5fa31175e121"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.732736 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m" (OuterVolumeSpecName: "kube-api-access-89h9m") pod "d797afdd-19c6-45ed-81c8-5fa31175e121" (UID: "d797afdd-19c6-45ed-81c8-5fa31175e121"). InnerVolumeSpecName "kube-api-access-89h9m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.755351 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d797afdd-19c6-45ed-81c8-5fa31175e121" (UID: "d797afdd-19c6-45ed-81c8-5fa31175e121"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.791434 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") pod \"7b3b0534-3356-446a-91e8-dae980c402db\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826731 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") pod \"478dee72-717a-448e-b14d-15d600c82eb5\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826764 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") pod \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826802 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") pod \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826820 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") pod \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826846 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826868 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") pod \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") pod \"478dee72-717a-448e-b14d-15d600c82eb5\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.826907 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") pod \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.827752 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities" (OuterVolumeSpecName: "utilities") pod "4bec6c8f-9678-463c-9e09-5b8e362f2f1b" (UID: "4bec6c8f-9678-463c-9e09-5b8e362f2f1b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.827941 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities" (OuterVolumeSpecName: "utilities") pod "478dee72-717a-448e-b14d-15d600c82eb5" (UID: "478dee72-717a-448e-b14d-15d600c82eb5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.828051 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.829787 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj" (OuterVolumeSpecName: "kube-api-access-99vxj") pod "ee77ca55-8cd0-4401-afec-9817fee5f6bb" (UID: "ee77ca55-8cd0-4401-afec-9817fee5f6bb"). InnerVolumeSpecName "kube-api-access-99vxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.830657 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv" (OuterVolumeSpecName: "kube-api-access-wpdsv") pod "478dee72-717a-448e-b14d-15d600c82eb5" (UID: "478dee72-717a-448e-b14d-15d600c82eb5"). InnerVolumeSpecName "kube-api-access-wpdsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.831304 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g" (OuterVolumeSpecName: "kube-api-access-2b84g") pod "7b3b0534-3356-446a-91e8-dae980c402db" (UID: "7b3b0534-3356-446a-91e8-dae980c402db"). InnerVolumeSpecName "kube-api-access-2b84g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832349 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") pod \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832391 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") pod \"478dee72-717a-448e-b14d-15d600c82eb5\" (UID: \"478dee72-717a-448e-b14d-15d600c82eb5\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832482 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.832556 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") pod \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\" (UID: \"ee77ca55-8cd0-4401-afec-9817fee5f6bb\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.833542 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities" (OuterVolumeSpecName: "utilities") pod "ee77ca55-8cd0-4401-afec-9817fee5f6bb" (UID: "ee77ca55-8cd0-4401-afec-9817fee5f6bb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.833916 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") pod \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\" (UID: \"4bec6c8f-9678-463c-9e09-5b8e362f2f1b\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.834383 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config" (OuterVolumeSpecName: "config") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835231 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") pod \"7b3b0534-3356-446a-91e8-dae980c402db\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") pod \"eefb5804-82d5-488f-a5c4-5473107ffbcd\" (UID: \"eefb5804-82d5-488f-a5c4-5473107ffbcd\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835770 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") pod \"7b3b0534-3356-446a-91e8-dae980c402db\" (UID: \"7b3b0534-3356-446a-91e8-dae980c402db\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.835797 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") pod \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\" (UID: \"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d\") " Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836523 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "7b3b0534-3356-446a-91e8-dae980c402db" (UID: "7b3b0534-3356-446a-91e8-dae980c402db"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836533 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836568 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836581 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836593 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2b84g\" (UniqueName: \"kubernetes.io/projected/7b3b0534-3356-446a-91e8-dae980c402db-kube-api-access-2b84g\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836604 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wpdsv\" (UniqueName: \"kubernetes.io/projected/478dee72-717a-448e-b14d-15d600c82eb5-kube-api-access-wpdsv\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836615 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836623 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836633 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-99vxj\" (UniqueName: \"kubernetes.io/projected/ee77ca55-8cd0-4401-afec-9817fee5f6bb-kube-api-access-99vxj\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836644 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836653 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d797afdd-19c6-45ed-81c8-5fa31175e121-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.836664 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89h9m\" (UniqueName: \"kubernetes.io/projected/d797afdd-19c6-45ed-81c8-5fa31175e121-kube-api-access-89h9m\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.837163 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx" (OuterVolumeSpecName: "kube-api-access-kj4fx") pod "4bec6c8f-9678-463c-9e09-5b8e362f2f1b" (UID: "4bec6c8f-9678-463c-9e09-5b8e362f2f1b"). InnerVolumeSpecName "kube-api-access-kj4fx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.837321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca" (OuterVolumeSpecName: "client-ca") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.838146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4" (OuterVolumeSpecName: "kube-api-access-hkcw4") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "kube-api-access-hkcw4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.839685 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "7b3b0534-3356-446a-91e8-dae980c402db" (UID: "7b3b0534-3356-446a-91e8-dae980c402db"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.839992 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eefb5804-82d5-488f-a5c4-5473107ffbcd" (UID: "eefb5804-82d5-488f-a5c4-5473107ffbcd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.840667 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc" (OuterVolumeSpecName: "kube-api-access-gn4jc") pod "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" (UID: "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d"). InnerVolumeSpecName "kube-api-access-gn4jc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.843885 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities" (OuterVolumeSpecName: "utilities") pod "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" (UID: "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.886785 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4bec6c8f-9678-463c-9e09-5b8e362f2f1b" (UID: "4bec6c8f-9678-463c-9e09-5b8e362f2f1b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.889720 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ee77ca55-8cd0-4401-afec-9817fee5f6bb" (UID: "ee77ca55-8cd0-4401-afec-9817fee5f6bb"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.937970 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ee77ca55-8cd0-4401-afec-9817fee5f6bb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938007 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkcw4\" (UniqueName: \"kubernetes.io/projected/eefb5804-82d5-488f-a5c4-5473107ffbcd-kube-api-access-hkcw4\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938023 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eefb5804-82d5-488f-a5c4-5473107ffbcd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938036 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj4fx\" (UniqueName: \"kubernetes.io/projected/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-kube-api-access-kj4fx\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938045 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938054 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eefb5804-82d5-488f-a5c4-5473107ffbcd-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938063 4985 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7b3b0534-3356-446a-91e8-dae980c402db-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938075 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gn4jc\" (UniqueName: \"kubernetes.io/projected/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-kube-api-access-gn4jc\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938084 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.938096 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4bec6c8f-9678-463c-9e09-5b8e362f2f1b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.958168 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.974232 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" (UID: "f17410ee-fc07-4e6c-8262-d3dad9ca4a5d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:46 crc kubenswrapper[4985]: I0128 18:18:46.979748 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "478dee72-717a-448e-b14d-15d600c82eb5" (UID: "478dee72-717a-448e-b14d-15d600c82eb5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.027058 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.039024 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.039321 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478dee72-717a-448e-b14d-15d600c82eb5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.057752 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.150591 4985 generic.go:334] "Generic (PLEG): container finished" podID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" exitCode=0 Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.150667 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.151479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerDied","Data":"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.151665 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6" event={"ID":"eefb5804-82d5-488f-a5c4-5473107ffbcd","Type":"ContainerDied","Data":"5b05bb1b67bf56c71462a79b529ac2543e0047903c359f6e9fac94a35e5f7aac"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.151766 4985 scope.go:117] "RemoveContainer" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.154117 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.154568 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-b5wzm" event={"ID":"7b3b0534-3356-446a-91e8-dae980c402db","Type":"ContainerDied","Data":"1e7f0e57b01f1d7574c6a758c09ab0d8248fafcd79d2a77c1cd5931c1c715640"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.172467 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-nbllw" event={"ID":"b3c2ecc0-c6a6-468b-bdcf-e84c2831a580","Type":"ContainerDied","Data":"fee5ad9c634324fb795c0ec18b20b982cec13ce8646e5a41d3259fd33ab8724c"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.172573 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-nbllw" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.181607 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-58qq5" event={"ID":"ee77ca55-8cd0-4401-afec-9817fee5f6bb","Type":"ContainerDied","Data":"29cf66044b42b3771161b4b736214738baedd3db9a4eab25aec806dff09290a6"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.181793 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-58qq5" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.184112 4985 scope.go:117] "RemoveContainer" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" Jan 28 18:18:47 crc kubenswrapper[4985]: E0128 18:18:47.184514 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1\": container with ID starting with a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1 not found: ID does not exist" containerID="a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.184560 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1"} err="failed to get container status \"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1\": rpc error: code = NotFound desc = could not find container \"a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1\": container with ID starting with a8c81232aaab7a9ef114be6094c57ea9375f6e1bfbddbc446018e71aace1dcb1 not found: ID does not exist" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.184594 4985 scope.go:117] "RemoveContainer" containerID="f64a1d12ad75e551f76bff45fa2c92285d9866a9c62ac072c671399e4e78b8f6" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.192148 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2zfzc" event={"ID":"478dee72-717a-448e-b14d-15d600c82eb5","Type":"ContainerDied","Data":"687d51d9587f9c808e73f6dce3d7fb729d7c957935ab306ab4a9c9ab274f7f6f"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.193189 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2zfzc" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.196186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" event={"ID":"e5f99d20-5afa-4144-b66e-9198c1d6c66d","Type":"ContainerDied","Data":"61b704f839468f67ac0c3f15e67acd552ecf612f482f58ba44a89c002ae8c45b"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.196221 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.199230 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mkflh" event={"ID":"d797afdd-19c6-45ed-81c8-5fa31175e121","Type":"ContainerDied","Data":"b846c4733fcd4ae67ec3f2920b60c675130ebbfa81d38792b482dedce235cc4c"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.199525 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mkflh" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.203594 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zcwgk" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.203091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zcwgk" event={"ID":"f17410ee-fc07-4e6c-8262-d3dad9ca4a5d","Type":"ContainerDied","Data":"2a41be352376fbadb1f7291b4affc279d9d298821bb817d8661c11256745bd0d"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.207506 4985 scope.go:117] "RemoveContainer" containerID="30ed9426cff32dd29f42b6c27b0db2bc04b4bceebc9ee807228b14314c6b1d45" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.213206 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.220509 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f8cf88bf9-bvxk6"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.226548 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.231485 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-vq448" event={"ID":"bebbf794-5459-4a75-bff1-92b7551d4784","Type":"ContainerDied","Data":"4227c1ef4517986db5b63f69f417525b1efc3dddfa056b58023dfaf2602681c9"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.231629 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-vq448" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.242769 4985 scope.go:117] "RemoveContainer" containerID="ea88d0096240b8b1ce3a53612acc27a9069f84f2e4c034995d9d80ba5534c382" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.243748 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkbjb" event={"ID":"4bec6c8f-9678-463c-9e09-5b8e362f2f1b","Type":"ContainerDied","Data":"7de4f851d6fd3b3bdf2435ffb6090fbd2d50bbda34ffd7c0a08f88549a7af86b"} Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.243901 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkbjb" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.254862 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-b5wzm"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.287496 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b3b0534-3356-446a-91e8-dae980c402db" path="/var/lib/kubelet/pods/7b3b0534-3356-446a-91e8-dae980c402db/volumes" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.288238 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" path="/var/lib/kubelet/pods/eefb5804-82d5-488f-a5c4-5473107ffbcd/volumes" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.288825 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff1a5336-5c99-49fa-bb89-311781866770" path="/var/lib/kubelet/pods/ff1a5336-5c99-49fa-bb89-311781866770/volumes" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.291369 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.291397 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zcwgk"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.292126 4985 scope.go:117] "RemoveContainer" containerID="5959b03d9788b40f0a702f2c357697b3ecb07a0cda1a9c0b368fd63267cd0bea" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.294847 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.297359 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5746676d8-2r8p5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.302086 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.310727 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.313471 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-58qq5"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.322570 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"] Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.323977 4985 scope.go:117] "RemoveContainer" containerID="01763e3cd2bd1b7e7c641c4d3e6204a47e371f36ee82046acaa6ead5f63ffa58" Jan 28 18:18:47 crc 
kubenswrapper[4985]: I0128 18:18:47.328027 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mkflh"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.334192 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-nbllw"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.338761 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-nbllw"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.341280 4985 scope.go:117] "RemoveContainer" containerID="5ae5d10976e7c26eb6213f430d17c638f8547abe24f44e7063a7dba954835ef4"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.344862 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.348057 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2zfzc"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.356393 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.358956 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkbjb"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.360003 4985 scope.go:117] "RemoveContainer" containerID="f89df29bdb5f4a1ac1d8a46bc1cdba1d48b8e3013145698fb6cdebd84b29470e"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.365782 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.369598 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-vq448"]
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.381700 4985 scope.go:117] "RemoveContainer" containerID="98509779ffc57e66e6d647b66aa2cfccf18d2d4bea5c3dca3fa2e44328a38480"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.399143 4985 scope.go:117] "RemoveContainer" containerID="c6a6370de55c9f1d322d443a680768dd95b5a50ccc8cfbead3f597f6cb81b47b"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.421565 4985 scope.go:117] "RemoveContainer" containerID="5673793a26abba26b8f6d32fd5a5358bd49bc89bef0867e3813c049e8ce5af23"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.440118 4985 scope.go:117] "RemoveContainer" containerID="c20541f2a2b39f6f832606efb9edd000b3514c07a50e47d18005696fc64446ca"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.454373 4985 scope.go:117] "RemoveContainer" containerID="9a773729ce7da9456028db66191225dafec61202d13d13e3c0cf77e40d3a65a1"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.477091 4985 scope.go:117] "RemoveContainer" containerID="08c2afc11e237eab84a8f7dfaa5b0598297222c01564bf4921e004a1b405af84"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.490648 4985 scope.go:117] "RemoveContainer" containerID="1c1dfa1718d5bb120e659769c80766e3c5cedbd440f581ae9a47ced34819aecd"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.503545 4985 scope.go:117] "RemoveContainer" containerID="eece386460fc88f0d1b18e248446179390fd7a1f344e841dca3acc21b1822f34"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.512612 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.517428 4985 scope.go:117] "RemoveContainer" containerID="82b69880adf61999e4575782c5ecaafe22c81d0a0e17bab967aa245eeb683a6c"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.532053 4985 scope.go:117] "RemoveContainer" containerID="232f8967da98b027f9bf4b5329e389ea4efabb6b13f4e9043541624ffe8ba02b"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.544961 4985 scope.go:117] "RemoveContainer" containerID="31e46ecf03175187af44eda5b4ce7d1101b0c4c1d73c57a447c29b34599240ab"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.563044 4985 scope.go:117] "RemoveContainer" containerID="c3c7c834b59dec9afe12ae5cb4e24ce5d7fb7d283ff22d3d168e71ce368d578d"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.587333 4985 scope.go:117] "RemoveContainer" containerID="e42228c4ddd411e6182ff6bcd41d0e27a2e8b74487dc7087bd1ccdb69c1e91bf"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.603610 4985 scope.go:117] "RemoveContainer" containerID="3d8cc26a1796f2bc2a7c499cb4517a2ba0d12df76aaa21278ad3e99d353f0c68"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.609682 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.619318 4985 scope.go:117] "RemoveContainer" containerID="f66d90e90c24d7eaca4eeddb8684aee625dffff1f85b1b4fa72af4b5c206bbee"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.635002 4985 scope.go:117] "RemoveContainer" containerID="6fbcabfceffdf85763f4008a949c3b5ecf075282566d7602a9169724a8470662"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.728776 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.770703 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.773025 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.804342 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.830113 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 18:18:47 crc kubenswrapper[4985]: I0128 18:18:47.982900 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 28 18:18:48 crc kubenswrapper[4985]: I0128 18:18:48.276954 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 28 18:18:48 crc kubenswrapper[4985]: I0128 18:18:48.850976 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 28 18:18:48 crc kubenswrapper[4985]: I0128 18:18:48.951607 4985 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.061166 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.121242 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.197874 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvkcw"]
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198181 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198202 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198215 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198222 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198230 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198236 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198270 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198277 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198287 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198293 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198299 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198305 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198311 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198317 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198327 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198334 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198341 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198349 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198359 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198365 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198373 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198379 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198387 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198393 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198401 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198408 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198420 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198426 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198436 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198443 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198450 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198457 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198465 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198472 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198480 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198486 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198495 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198501 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198512 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198519 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198525 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198543 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198550 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198556 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198565 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198572 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198579 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198585 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198593 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerName="installer"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198600 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerName="installer"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198616 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="extract-content"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198624 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198630 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198638 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198645 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: E0128 18:18:49.198652 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198659 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="extract-utilities"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198743 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b3b0534-3356-446a-91e8-dae980c402db" containerName="marketplace-operator"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198754 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198762 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff1a5336-5c99-49fa-bb89-311781866770" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198769 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198775 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198781 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="478dee72-717a-448e-b14d-15d600c82eb5" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198790 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="eefb5804-82d5-488f-a5c4-5473107ffbcd" containerName="controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198799 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198808 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198813 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97e98d6-b3fb-4d0b-a91e-00e4d18089c9" containerName="installer"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198821 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198828 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" containerName="registry-server"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.198836 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" containerName="route-controller-manager"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.199290 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.202468 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.202566 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.202814 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"]
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.203680 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.204698 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.205175 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206791 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206938 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206985 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.206999 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.207140 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.207796 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.212073 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.212225 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvkcw"]
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.216486 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"]
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.269818 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.269958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270001 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270094 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cnps\" (UniqueName: \"kubernetes.io/projected/4845499d-139f-4839-9f9f-4d77c7f0ae37-kube-api-access-4cnps\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270298 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.270359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.272859 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="478dee72-717a-448e-b14d-15d600c82eb5" path="/var/lib/kubelet/pods/478dee72-717a-448e-b14d-15d600c82eb5/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.273982 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bec6c8f-9678-463c-9e09-5b8e362f2f1b" path="/var/lib/kubelet/pods/4bec6c8f-9678-463c-9e09-5b8e362f2f1b/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.274743 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3c2ecc0-c6a6-468b-bdcf-e84c2831a580" path="/var/lib/kubelet/pods/b3c2ecc0-c6a6-468b-bdcf-e84c2831a580/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.275865 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bebbf794-5459-4a75-bff1-92b7551d4784" path="/var/lib/kubelet/pods/bebbf794-5459-4a75-bff1-92b7551d4784/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.276459 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d797afdd-19c6-45ed-81c8-5fa31175e121" path="/var/lib/kubelet/pods/d797afdd-19c6-45ed-81c8-5fa31175e121/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.277494 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5f99d20-5afa-4144-b66e-9198c1d6c66d" path="/var/lib/kubelet/pods/e5f99d20-5afa-4144-b66e-9198c1d6c66d/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.278070 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee77ca55-8cd0-4401-afec-9817fee5f6bb" path="/var/lib/kubelet/pods/ee77ca55-8cd0-4401-afec-9817fee5f6bb/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.278614 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f17410ee-fc07-4e6c-8262-d3dad9ca4a5d" path="/var/lib/kubelet/pods/f17410ee-fc07-4e6c-8262-d3dad9ca4a5d/volumes"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.328080 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.371839 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.371986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cnps\" (UniqueName: \"kubernetes.io/projected/4845499d-139f-4839-9f9f-4d77c7f0ae37-kube-api-access-4cnps\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372011 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372032 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372094 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.372115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.374214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.374867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.376768 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.389944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.390478 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/4845499d-139f-4839-9f9f-4d77c7f0ae37-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.395796 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod \"route-controller-manager-bf849c6d6-gczxt\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.396063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cnps\" (UniqueName: \"kubernetes.io/projected/4845499d-139f-4839-9f9f-4d77c7f0ae37-kube-api-access-4cnps\") pod \"marketplace-operator-79b997595-hvkcw\" (UID: \"4845499d-139f-4839-9f9f-4d77c7f0ae37\") " pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.442051 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.458056 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.556905 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.570505 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.731413 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.770657 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"]
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.892159 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Jan 28 18:18:49 crc kubenswrapper[4985]: I0128 18:18:49.911394 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.018804 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-hvkcw"]
Jan 28 18:18:50 crc kubenswrapper[4985]: W0128 18:18:50.023314 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4845499d_139f_4839_9f9f_4d77c7f0ae37.slice/crio-ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48 WatchSource:0}: Error finding container ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48: Status 404 returned error can't find the container with id ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.049971 4985 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.050386 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f" gracePeriod=5
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.225926 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.275837 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerStarted","Data":"dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089"}
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.275894 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerStarted","Data":"ac463c2c0bf66adfc9b65f50c82aeb322d76085e4cecf33f4cc8262707f86f48"}
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.276363 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.278049 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body=
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.278128 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.280170 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerStarted","Data":"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"}
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.280207 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerStarted","Data":"63bfaff3938f44bf1190a7307ea884168e52b2cc5fe98c3d56e1af05c046f6ea"}
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.280691 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.294318 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.305308 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podStartSLOduration=6.30528515 podStartE2EDuration="6.30528515s" podCreationTimestamp="2026-01-28 18:18:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:50.304029874 +0000 UTC m=+341.130592705" watchObservedRunningTime="2026-01-28 18:18:50.30528515 +0000 UTC m=+341.131847971"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.623438 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Jan 28 18:18:50 crc kubenswrapper[4985]: I0128 18:18:50.894338 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.087119 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" podStartSLOduration=6.087096326 podStartE2EDuration="6.087096326s" podCreationTimestamp="2026-01-28 18:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:50.331522526 +0000 UTC m=+341.158085357" watchObservedRunningTime="2026-01-28 18:18:51.087096326 +0000 UTC m=+341.913659147"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.089528 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"]
Jan 28 18:18:51 crc kubenswrapper[4985]: E0128 18:18:51.089795 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.089818 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.089956 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.090445 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093125 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093530 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093782 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.093820 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.096776 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.097366 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.103017 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.103855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.103901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.104008 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.104084 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.104136 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.105489 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"]
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.205135 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.205997 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.206079 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.206155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.206195 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.207572 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.208357 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.209104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.218502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.227534 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"controller-manager-8658c89568-xqc66\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.250496 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.261138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.288887 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.434680 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.503656 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.540444 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.642425 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"]
Jan 28 18:18:51 crc kubenswrapper[4985]: W0128 18:18:51.648914 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd33a411_202c_41c4_a6b0_cf49ca4945a0.slice/crio-62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113 WatchSource:0}: Error finding container 62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113: Status 404 returned error can't find the container with id 62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.685099 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 18:18:51 crc kubenswrapper[4985]: I0128 18:18:51.867303 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.209680 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.293018 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerStarted","Data":"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"}
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.293056 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerStarted","Data":"62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113"}
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.293627 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.297704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.313888 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" podStartSLOduration=7.31386885 podStartE2EDuration="7.31386885s" podCreationTimestamp="2026-01-28 18:18:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:18:52.31002445 +0000 UTC m=+343.136587281" watchObservedRunningTime="2026-01-28 18:18:52.31386885 +0000 UTC m=+343.140431671"
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.637671 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 18:18:52 crc kubenswrapper[4985]: I0128 18:18:52.850395 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.251857 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.292817 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.294651 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.716368 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 18:18:53 crc kubenswrapper[4985]: I0128 18:18:53.774603 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.111084 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.407471 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.456898 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 28 18:18:54 crc kubenswrapper[4985]: I0128 18:18:54.458445 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.186456 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.254832 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.285686 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.297761 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.307965 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.308013 4985 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f" exitCode=137
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.382567 4985 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.518968 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.593983 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.649041 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.649117 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771042 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771110 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771158 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771226 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771398 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771276 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771327 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771462 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") "
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771731 4985 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771761 4985 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771772 4985 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.771784 4985 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.780835 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.827374 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 18:18:55 crc kubenswrapper[4985]: I0128 18:18:55.873174 4985 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\""
Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.275034 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.316037 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log"
Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.316117 4985 scope.go:117] "RemoveContainer" containerID="ab7d18f55611d02a03d62a6ebace75ed35b7b1a319a4367884bd6c2504dce01f"
Jan 28 18:18:56 crc kubenswrapper[4985]: I0128 18:18:56.316181 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.264996 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.270128 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes"
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.270289 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.270428 4985 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID=""
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.283167 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.283216 4985 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2b8db072-9548-45aa-92d1-61dab999c4ad"
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.299253 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.299328 4985 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="2b8db072-9548-45aa-92d1-61dab999c4ad"
Jan 28 18:18:57 crc kubenswrapper[4985]: I0128 18:18:57.806374 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 18:18:58 crc kubenswrapper[4985]: I0128 18:18:58.728107 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 18:18:58 crc kubenswrapper[4985]: I0128 18:18:58.843686 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 28 18:18:59 crc kubenswrapper[4985]: I0128 18:18:59.196771 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 28 18:19:01 crc kubenswrapper[4985]: I0128 18:19:01.085827 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 28 18:19:03 crc kubenswrapper[4985]: I0128 18:19:03.223960 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 18:19:03 crc kubenswrapper[4985]: I0128 18:19:03.251553 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 18:19:03 crc kubenswrapper[4985]: I0128 18:19:03.917848 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 28 18:19:04 crc kubenswrapper[4985]: I0128 18:19:04.386884 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 28 18:19:04 crc kubenswrapper[4985]: I0128 18:19:04.899475 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.506189 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"]
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.507448 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" containerID="cri-o://b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" gracePeriod=30
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.519040 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"]
Jan 28 18:19:05 crc kubenswrapper[4985]: I0128 18:19:05.519406 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" containerID="cri-o://176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" gracePeriod=30
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.031066 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.097312 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.109999 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.214852 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.214919 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.214984 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215009 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") "
Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215035 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215113 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") pod \"7c43298b-f494-48e0-b307-61e702afc5ef\" (UID: \"7c43298b-f494-48e0-b307-61e702afc5ef\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215137 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") pod \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\" (UID: \"fd33a411-202c-41c4-a6b0-cf49ca4945a0\") " Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215788 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca" (OuterVolumeSpecName: "client-ca") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215796 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca" (OuterVolumeSpecName: "client-ca") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215919 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.215997 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config" (OuterVolumeSpecName: "config") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.216483 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config" (OuterVolumeSpecName: "config") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221000 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4" (OuterVolumeSpecName: "kube-api-access-xqgr4") pod "7c43298b-f494-48e0-b307-61e702afc5ef" (UID: "7c43298b-f494-48e0-b307-61e702afc5ef"). InnerVolumeSpecName "kube-api-access-xqgr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221098 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm" (OuterVolumeSpecName: "kube-api-access-qj7rm") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "kube-api-access-qj7rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.221481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fd33a411-202c-41c4-a6b0-cf49ca4945a0" (UID: "fd33a411-202c-41c4-a6b0-cf49ca4945a0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316167 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj7rm\" (UniqueName: \"kubernetes.io/projected/fd33a411-202c-41c4-a6b0-cf49ca4945a0-kube-api-access-qj7rm\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316250 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316279 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316292 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqgr4\" (UniqueName: \"kubernetes.io/projected/7c43298b-f494-48e0-b307-61e702afc5ef-kube-api-access-xqgr4\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316303 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7c43298b-f494-48e0-b307-61e702afc5ef-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316313 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fd33a411-202c-41c4-a6b0-cf49ca4945a0-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316324 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316334 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fd33a411-202c-41c4-a6b0-cf49ca4945a0-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.316344 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7c43298b-f494-48e0-b307-61e702afc5ef-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.376944 4985 generic.go:334] "Generic (PLEG): container finished" podID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" exitCode=0 Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.377045 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerDied","Data":"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"} Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.377083 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" event={"ID":"fd33a411-202c-41c4-a6b0-cf49ca4945a0","Type":"ContainerDied","Data":"62677165cf1b3b0e28ffd442affb496e27e4262d996055f35c45380f63073113"} Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.377118 4985 scope.go:117] "RemoveContainer" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" Jan 28 
18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.377369 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-8658c89568-xqc66" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379490 4985 generic.go:334] "Generic (PLEG): container finished" podID="7c43298b-f494-48e0-b307-61e702afc5ef" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" exitCode=0 Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerDied","Data":"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"} Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" event={"ID":"7c43298b-f494-48e0-b307-61e702afc5ef","Type":"ContainerDied","Data":"63bfaff3938f44bf1190a7307ea884168e52b2cc5fe98c3d56e1af05c046f6ea"} Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.379716 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.400662 4985 scope.go:117] "RemoveContainer" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" Jan 28 18:19:06 crc kubenswrapper[4985]: E0128 18:19:06.401245 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85\": container with ID starting with b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85 not found: ID does not exist" containerID="b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.401303 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85"} err="failed to get container status \"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85\": rpc error: code = NotFound desc = could not find container \"b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85\": container with ID starting with b29056f11a11fdafa5a5af1c8ab2aee145a258f795e7d8a76f56701d4b3b3c85 not found: ID does not exist" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.401335 4985 scope.go:117] "RemoveContainer" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.419428 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"] Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.421961 4985 scope.go:117] "RemoveContainer" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" Jan 28 18:19:06 crc kubenswrapper[4985]: E0128 18:19:06.423667 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795\": container with ID starting with 176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795 not found: ID does not 
exist" containerID="176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.423719 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795"} err="failed to get container status \"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795\": rpc error: code = NotFound desc = could not find container \"176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795\": container with ID starting with 176060ebc5a5029cb70a2553e85408a94eb941bcf9971d30acf0bae11d677795 not found: ID does not exist" Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.427458 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-8658c89568-xqc66"] Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.431305 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"] Jan 28 18:19:06 crc kubenswrapper[4985]: I0128 18:19:06.434507 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-bf849c6d6-gczxt"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102319 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:07 crc kubenswrapper[4985]: E0128 18:19:07.102671 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102687 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: E0128 18:19:07.102878 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102885 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.102985 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" containerName="controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.103007 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" containerName="route-controller-manager" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.103570 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.105285 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.105589 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.105864 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.106808 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.106838 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.106837 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.110644 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.111725 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116038 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116042 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116174 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116439 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.116508 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.118061 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.122886 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.123312 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126653 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " 
pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126746 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126794 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126809 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126825 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126843 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.126945 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"controller-manager-685b767c78-2pk2s\" (UID: 
\"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.134936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227893 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227964 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.227999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod 
\"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.228081 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.229331 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.229380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.230702 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.231214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.231475 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.234502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.234685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.253328 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod \"route-controller-manager-cf8f7d6b6-cb5sn\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.253431 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"controller-manager-685b767c78-2pk2s\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.272501 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c43298b-f494-48e0-b307-61e702afc5ef" path="/var/lib/kubelet/pods/7c43298b-f494-48e0-b307-61e702afc5ef/volumes" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.273276 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fd33a411-202c-41c4-a6b0-cf49ca4945a0" path="/var/lib/kubelet/pods/fd33a411-202c-41c4-a6b0-cf49ca4945a0/volumes" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.373056 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.432685 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.432960 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.443340 4985 util.go:30] "No sandbox for pod can be found. 
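For the two replacement pods, the mount path runs VerifyControllerAttachedVolume, then "MountVolume started", then "MountVolume.SetUp succeeded" per volume before the sandbox can start. A sketch that flags any SetUp success reported without a preceding start, keyed on the UniqueName since short names like "client-ca" and "config" repeat across the two pods; same one-record-per-line and escaping assumptions as the earlier sketches:

import re
import sys

STARTED = re.compile(r'MountVolume started for volume .*?UniqueName: \\?"(?P<vol>[^"\\]+)')
SETUP = re.compile(r'MountVolume\.SetUp succeeded for volume .*?UniqueName: \\?"(?P<vol>[^"\\]+)')

def out_of_order_mounts(lines):
    """Yield UniqueNames whose SetUp success has no earlier 'started' record."""
    started = set()
    for line in lines:
        s = STARTED.search(line)
        if s:
            started.add(s.group('vol'))
        ok = SETUP.search(line)
        if ok and ok.group('vol') not in started:
            yield ok.group('vol')

if __name__ == '__main__':
    for vol in out_of_order_mounts(sys.stdin):
        print('SetUp before started:', vol)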
Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.562678 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.629131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:07 crc kubenswrapper[4985]: W0128 18:19:07.642056 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod015deeab_c778_426c_ae5e_c5a0ab596483.slice/crio-ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745 WatchSource:0}: Error finding container ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745: Status 404 returned error can't find the container with id ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745 Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.673423 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"] Jan 28 18:19:07 crc kubenswrapper[4985]: I0128 18:19:07.871040 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.395331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerStarted","Data":"bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d"} Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.395472 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.395489 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerStarted","Data":"ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745"} Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.398173 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerStarted","Data":"0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1"} Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.398212 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerStarted","Data":"105cd5f36d905cb5f852dedc6ba5310ebfc115c0484f7e113137d4c547156ef4"} Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.398473 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.406444 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.415174 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" podStartSLOduration=3.4151583260000002 podStartE2EDuration="3.415158326s" podCreationTimestamp="2026-01-28 18:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:08.412375106 +0000 UTC m=+359.238937927" watchObservedRunningTime="2026-01-28 18:19:08.415158326 +0000 UTC m=+359.241721147" Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.437808 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" podStartSLOduration=3.437791599 podStartE2EDuration="3.437791599s" podCreationTimestamp="2026-01-28 18:19:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:08.436731718 +0000 UTC m=+359.263294539" watchObservedRunningTime="2026-01-28 18:19:08.437791599 +0000 UTC m=+359.264354420" Jan 28 18:19:08 crc kubenswrapper[4985]: I0128 18:19:08.580464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.168395 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.782995 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.791696 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.834940 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 18:19:09 crc kubenswrapper[4985]: I0128 18:19:09.838361 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 18:19:10 crc kubenswrapper[4985]: I0128 18:19:10.005812 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 18:19:10 crc kubenswrapper[4985]: I0128 18:19:10.717915 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 28 18:19:11 crc kubenswrapper[4985]: I0128 18:19:11.186606 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:19:11 crc kubenswrapper[4985]: I0128 18:19:11.186687 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:19:12 crc kubenswrapper[4985]: I0128 18:19:12.124742 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 
18:19:12 crc kubenswrapper[4985]: I0128 18:19:12.445241 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 18:19:12 crc kubenswrapper[4985]: I0128 18:19:12.891393 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 18:19:13 crc kubenswrapper[4985]: I0128 18:19:13.212228 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 18:19:16 crc kubenswrapper[4985]: I0128 18:19:16.464344 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 18:19:17 crc kubenswrapper[4985]: I0128 18:19:17.543815 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 18:19:17 crc kubenswrapper[4985]: I0128 18:19:17.687054 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.129671 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.212638 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.392859 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.474371 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 18:19:18 crc kubenswrapper[4985]: I0128 18:19:18.919300 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 18:19:19 crc kubenswrapper[4985]: I0128 18:19:19.737825 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:19:20 crc kubenswrapper[4985]: I0128 18:19:20.453566 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 28 18:19:20 crc kubenswrapper[4985]: I0128 18:19:20.517935 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 18:19:21 crc kubenswrapper[4985]: I0128 18:19:21.469112 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 18:19:22 crc kubenswrapper[4985]: I0128 18:19:22.144311 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 28 18:19:22 crc kubenswrapper[4985]: I0128 18:19:22.190949 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 18:19:23 crc kubenswrapper[4985]: I0128 18:19:23.158826 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 18:19:25 crc kubenswrapper[4985]: I0128 18:19:25.512624 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"] Jan 28 18:19:25 
crc kubenswrapper[4985]: I0128 18:19:25.512976 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager" containerID="cri-o://0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1" gracePeriod=30 Jan 28 18:19:25 crc kubenswrapper[4985]: E0128 18:19:25.648468 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75e23934_9cb3_423f_92d4_888a740e00f3.slice/crio-0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:19:25 crc kubenswrapper[4985]: I0128 18:19:25.739059 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.507950 4985 generic.go:334] "Generic (PLEG): container finished" podID="75e23934-9cb3-423f-92d4-888a740e00f3" containerID="0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1" exitCode=0 Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.508067 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerDied","Data":"0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1"} Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.604709 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.710599 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.745785 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773136 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-656679f4c7-mmrtg"] Jan 28 18:19:26 crc kubenswrapper[4985]: E0128 18:19:26.773384 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773399 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773526 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" containerName="controller-manager" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.773933 4985 util.go:30] "No sandbox for pod can be found. 
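The graceful shutdown above pairs a "Killing container with a grace period" record (gracePeriod=30) with a later PLEG "container finished" record for the same containerID, and exitCode=0 shows the container beat the deadline. A sketch that measures kill-to-exit latency from the journald timestamps; the year is an assumption (journald omits it, and the creation timestamps in this log say 2026), and one record per line is again assumed:

import re
import sys
from datetime import datetime

TS = re.compile(r'^(?P<ts>\w{3} +\d+ \d{2}:\d{2}:\d{2}) ')
KILL = re.compile(r'Killing container with a grace period.*?containerID="cri-o://(?P<cid>[0-9a-f]+)".*?gracePeriod=(?P<grace>\d+)')
DIED = re.compile(r'container finished.*?containerID="(?P<cid>[0-9a-f]+)"')

def kill_latency(lines, year=2026):
    """Pair each graceful kill with its 'container finished' record and
    report how long the container actually took to exit."""
    kills = {}
    for line in lines:
        ts = TS.match(line)
        if not ts:
            continue
        when = datetime.strptime(f'{year} {ts.group("ts")}', '%Y %b %d %H:%M:%S')
        k = KILL.search(line)
        if k:
            kills[k.group('cid')] = (when, int(k.group('grace')))
        d = DIED.search(line)
        if d and d.group('cid') in kills:
            started, grace = kills.pop(d.group('cid'))
            took = (when - started).total_seconds()
            yield d.group('cid')[:12], took, grace

if __name__ == '__main__':
    for cid, took, grace in kill_latency(sys.stdin):
        print(f'{cid}: exited after {took:.0f}s (grace {grace}s)')

On the records above this would report roughly a one-second exit for controller-manager container 0920456814e8, well inside its 30-second grace period.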
Need to start a new one" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.814189 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-656679f4c7-mmrtg"] Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915723 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915834 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n54nd\" (UniqueName: \"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.915958 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916014 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") pod \"75e23934-9cb3-423f-92d4-888a740e00f3\" (UID: \"75e23934-9cb3-423f-92d4-888a740e00f3\") " Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916314 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0590b9a-abcc-4541-9914-675dc0ca1976-serving-cert\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvxjc\" (UniqueName: \"kubernetes.io/projected/a0590b9a-abcc-4541-9914-675dc0ca1976-kube-api-access-tvxjc\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916412 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-config\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916440 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-client-ca\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.916491 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-proxy-ca-bundles\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.917614 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.917817 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config" (OuterVolumeSpecName: "config") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.918328 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca" (OuterVolumeSpecName: "client-ca") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.923740 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:26 crc kubenswrapper[4985]: I0128 18:19:26.927424 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd" (OuterVolumeSpecName: "kube-api-access-n54nd") pod "75e23934-9cb3-423f-92d4-888a740e00f3" (UID: "75e23934-9cb3-423f-92d4-888a740e00f3"). InnerVolumeSpecName "kube-api-access-n54nd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.009646 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvxjc\" (UniqueName: \"kubernetes.io/projected/a0590b9a-abcc-4541-9914-675dc0ca1976-kube-api-access-tvxjc\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-config\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-client-ca\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018447 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-proxy-ca-bundles\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0590b9a-abcc-4541-9914-675dc0ca1976-serving-cert\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018548 4985 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018561 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75e23934-9cb3-423f-92d4-888a740e00f3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018570 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018579 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75e23934-9cb3-423f-92d4-888a740e00f3-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.018591 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n54nd\" (UniqueName: 
\"kubernetes.io/projected/75e23934-9cb3-423f-92d4-888a740e00f3-kube-api-access-n54nd\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.020586 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-client-ca\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.020679 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-proxy-ca-bundles\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.021989 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a0590b9a-abcc-4541-9914-675dc0ca1976-config\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.023777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a0590b9a-abcc-4541-9914-675dc0ca1976-serving-cert\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.046369 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvxjc\" (UniqueName: \"kubernetes.io/projected/a0590b9a-abcc-4541-9914-675dc0ca1976-kube-api-access-tvxjc\") pod \"controller-manager-656679f4c7-mmrtg\" (UID: \"a0590b9a-abcc-4541-9914-675dc0ca1976\") " pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.103768 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.517975 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" event={"ID":"75e23934-9cb3-423f-92d4-888a740e00f3","Type":"ContainerDied","Data":"105cd5f36d905cb5f852dedc6ba5310ebfc115c0484f7e113137d4c547156ef4"} Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.518078 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-685b767c78-2pk2s" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.518638 4985 scope.go:117] "RemoveContainer" containerID="0920456814e8166a02375b5225682a3378d326919adc34b964ae16737f8fd4a1" Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.545774 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"] Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.552240 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-685b767c78-2pk2s"] Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.598998 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-656679f4c7-mmrtg"] Jan 28 18:19:27 crc kubenswrapper[4985]: W0128 18:19:27.607160 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0590b9a_abcc_4541_9914_675dc0ca1976.slice/crio-941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35 WatchSource:0}: Error finding container 941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35: Status 404 returned error can't find the container with id 941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35 Jan 28 18:19:27 crc kubenswrapper[4985]: I0128 18:19:27.629492 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.525787 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerStarted","Data":"03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c"} Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.525836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerStarted","Data":"941c6bfd322e3e4ce80a380a1c59b768a8e5b3e90786970cef77e19ab5eb8c35"} Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.526092 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.538137 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.567051 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podStartSLOduration=3.567015363 podStartE2EDuration="3.567015363s" podCreationTimestamp="2026-01-28 18:19:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:28.542896128 +0000 UTC m=+379.369458949" watchObservedRunningTime="2026-01-28 18:19:28.567015363 +0000 UTC m=+379.393578184" Jan 28 18:19:28 crc kubenswrapper[4985]: I0128 18:19:28.928345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.262864 4985 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.270821 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75e23934-9cb3-423f-92d4-888a740e00f3" path="/var/lib/kubelet/pods/75e23934-9cb3-423f-92d4-888a740e00f3/volumes" Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.432078 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 18:19:29 crc kubenswrapper[4985]: I0128 18:19:29.713172 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 18:19:30 crc kubenswrapper[4985]: I0128 18:19:30.042321 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 18:19:30 crc kubenswrapper[4985]: I0128 18:19:30.635451 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 18:19:30 crc kubenswrapper[4985]: I0128 18:19:30.700410 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 18:19:31 crc kubenswrapper[4985]: I0128 18:19:31.110624 4985 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 18:19:31 crc kubenswrapper[4985]: I0128 18:19:31.925138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 18:19:31 crc kubenswrapper[4985]: I0128 18:19:31.982556 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.024925 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.168575 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.311294 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.362294 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 18:19:32 crc kubenswrapper[4985]: I0128 18:19:32.563159 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 18:19:33 crc kubenswrapper[4985]: I0128 18:19:33.549372 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 28 18:19:35 crc kubenswrapper[4985]: I0128 18:19:35.471308 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 18:19:35 crc kubenswrapper[4985]: I0128 18:19:35.605634 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 18:19:36 crc kubenswrapper[4985]: I0128 18:19:36.405081 4985 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 18:19:36 crc kubenswrapper[4985]: I0128 18:19:36.849057 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 18:19:36 crc kubenswrapper[4985]: I0128 18:19:36.942522 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 18:19:41 crc kubenswrapper[4985]: I0128 18:19:41.185858 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:19:41 crc kubenswrapper[4985]: I0128 18:19:41.185954 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.923196 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"] Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.925433 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.928159 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.928654 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.931529 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"] Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.934277 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-dockercfg-wwt9l" Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.934393 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.934394 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Jan 28 18:19:42 crc kubenswrapper[4985]: I0128 18:19:42.942650 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.056707 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73cc747-1671-4ae3-8784-3087a06b300c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.057141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz29k\" (UniqueName: 
\"kubernetes.io/projected/a73cc747-1671-4ae3-8784-3087a06b300c-kube-api-access-gz29k\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.057280 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a73cc747-1671-4ae3-8784-3087a06b300c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.158214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a73cc747-1671-4ae3-8784-3087a06b300c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.158362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73cc747-1671-4ae3-8784-3087a06b300c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.158410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz29k\" (UniqueName: \"kubernetes.io/projected/a73cc747-1671-4ae3-8784-3087a06b300c-kube-api-access-gz29k\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.159387 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/a73cc747-1671-4ae3-8784-3087a06b300c-telemetry-config\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.179728 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/a73cc747-1671-4ae3-8784-3087a06b300c-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.184451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz29k\" (UniqueName: \"kubernetes.io/projected/a73cc747-1671-4ae3-8784-3087a06b300c-kube-api-access-gz29k\") pod \"cluster-monitoring-operator-6d5b84845-sxjv7\" (UID: \"a73cc747-1671-4ae3-8784-3087a06b300c\") " pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.247893 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" Jan 28 18:19:43 crc kubenswrapper[4985]: I0128 18:19:43.710086 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7"] Jan 28 18:19:43 crc kubenswrapper[4985]: W0128 18:19:43.718486 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda73cc747_1671_4ae3_8784_3087a06b300c.slice/crio-85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445 WatchSource:0}: Error finding container 85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445: Status 404 returned error can't find the container with id 85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445 Jan 28 18:19:44 crc kubenswrapper[4985]: I0128 18:19:44.624465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" event={"ID":"a73cc747-1671-4ae3-8784-3087a06b300c","Type":"ContainerStarted","Data":"85766ca89e109c438601bbfe442aa785dad6b81873524afe9a524ae10859e445"} Jan 28 18:19:44 crc kubenswrapper[4985]: I0128 18:19:44.782350 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" containerID="cri-o://4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" gracePeriod=15 Jan 28 18:19:44 crc kubenswrapper[4985]: I0128 18:19:44.797160 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.336984 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.393000 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-56cf947455-bgjvj"] Jan 28 18:19:45 crc kubenswrapper[4985]: E0128 18:19:45.393345 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.393363 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.393482 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerName="oauth-openshift" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.394003 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.399273 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56cf947455-bgjvj"] Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.490872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.490923 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491011 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491067 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491090 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491113 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.491268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492272 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492327 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492362 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492585 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492623 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492678 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") pod \"d061f6d6-1983-405d-93af-3e492ff49f7c\" (UID: \"d061f6d6-1983-405d-93af-3e492ff49f7c\") " Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492837 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492881 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492905 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-service-ca\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492962 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-dir\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.492985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-session\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493017 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-error\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493103 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493139 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493163 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-login\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493332 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7frd\" (UniqueName: \"kubernetes.io/projected/f077e962-d9b2-45c5-a87e-1dd03ad0378b-kube-api-access-h7frd\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-policies\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493437 4985 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493452 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.493474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). 
InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.494114 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.499048 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.507550 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc" (OuterVolumeSpecName: "kube-api-access-jcmdc") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "kube-api-access-jcmdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.507660 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.508804 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.509527 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.510136 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.510257 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.510330 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.524531 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.525441 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "d061f6d6-1983-405d-93af-3e492ff49f7c" (UID: "d061f6d6-1983-405d-93af-3e492ff49f7c"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.532106 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.532888 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" containerID="cri-o://bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d" gracePeriod=30 Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595145 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595244 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-login\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595307 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7frd\" (UniqueName: \"kubernetes.io/projected/f077e962-d9b2-45c5-a87e-1dd03ad0378b-kube-api-access-h7frd\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595382 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-policies\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.595429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-router-certs\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596493 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-policies\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596564 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596594 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596648 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-service-ca\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-dir\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596621 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596701 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-session\") pod 
\"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596732 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-error\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.596798 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597310 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-cliconfig\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597318 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-service-ca\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597356 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f077e962-d9b2-45c5-a87e-1dd03ad0378b-audit-dir\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597665 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597692 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597706 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597719 4985 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597731 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597745 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597759 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597771 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcmdc\" (UniqueName: \"kubernetes.io/projected/d061f6d6-1983-405d-93af-3e492ff49f7c-kube-api-access-jcmdc\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597792 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597804 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.597815 4985 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d061f6d6-1983-405d-93af-3e492ff49f7c-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.599028 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-login\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.599540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.600184 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.600525 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-error\") pod 
\"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.600733 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-session\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.601565 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-serving-cert\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.602462 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-system-router-certs\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.602837 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f077e962-d9b2-45c5-a87e-1dd03ad0378b-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.612921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h7frd\" (UniqueName: \"kubernetes.io/projected/f077e962-d9b2-45c5-a87e-1dd03ad0378b-kube-api-access-h7frd\") pod \"oauth-openshift-56cf947455-bgjvj\" (UID: \"f077e962-d9b2-45c5-a87e-1dd03ad0378b\") " pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649793 4985 generic.go:334] "Generic (PLEG): container finished" podID="d061f6d6-1983-405d-93af-3e492ff49f7c" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" exitCode=0 Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerDied","Data":"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5"} Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" event={"ID":"d061f6d6-1983-405d-93af-3e492ff49f7c","Type":"ContainerDied","Data":"92eb3ea915f09fd028998d05f1f049bc1e5781547f5807090433223897100c78"} Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.649913 4985 scope.go:117] "RemoveContainer" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.650064 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-fdfqq" Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.698912 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.706797 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-fdfqq"] Jan 28 18:19:45 crc kubenswrapper[4985]: I0128 18:19:45.715552 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.378287 4985 scope.go:117] "RemoveContainer" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" Jan 28 18:19:46 crc kubenswrapper[4985]: E0128 18:19:46.379050 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5\": container with ID starting with 4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5 not found: ID does not exist" containerID="4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.379086 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5"} err="failed to get container status \"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5\": rpc error: code = NotFound desc = could not find container \"4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5\": container with ID starting with 4e030e02719f7b54e22718eb7afac73806abe0dae40f51ad7d7a32d58ebfbee5 not found: ID does not exist" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.657671 4985 generic.go:334] "Generic (PLEG): container finished" podID="015deeab-c778-426c-ae5e-c5a0ab596483" containerID="bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d" exitCode=0 Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.657732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerDied","Data":"bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d"} Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.762237 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.807689 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p"] Jan 28 18:19:46 crc kubenswrapper[4985]: E0128 18:19:46.808010 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808035 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808198 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" containerName="route-controller-manager" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808714 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p"] Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.808852 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.885770 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-56cf947455-bgjvj"] Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921734 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921790 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921812 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.921839 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") pod \"015deeab-c778-426c-ae5e-c5a0ab596483\" (UID: \"015deeab-c778-426c-ae5e-c5a0ab596483\") " Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922054 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-config\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922093 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-client-ca\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27n5s\" (UniqueName: \"kubernetes.io/projected/983beebe-f0c3-4fba-9861-0ea007559cc5-kube-api-access-27n5s\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.922143 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983beebe-f0c3-4fba-9861-0ea007559cc5-serving-cert\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.923011 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca" (OuterVolumeSpecName: "client-ca") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.923764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config" (OuterVolumeSpecName: "config") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.928711 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26" (OuterVolumeSpecName: "kube-api-access-jdd26") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "kube-api-access-jdd26". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:19:46 crc kubenswrapper[4985]: I0128 18:19:46.931053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "015deeab-c778-426c-ae5e-c5a0ab596483" (UID: "015deeab-c778-426c-ae5e-c5a0ab596483"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.025882 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983beebe-f0c3-4fba-9861-0ea007559cc5-serving-cert\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-config\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-client-ca\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026079 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27n5s\" (UniqueName: \"kubernetes.io/projected/983beebe-f0c3-4fba-9861-0ea007559cc5-kube-api-access-27n5s\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026174 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdd26\" (UniqueName: \"kubernetes.io/projected/015deeab-c778-426c-ae5e-c5a0ab596483-kube-api-access-jdd26\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026187 4985 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026197 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/015deeab-c778-426c-ae5e-c5a0ab596483-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.026207 4985 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/015deeab-c778-426c-ae5e-c5a0ab596483-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.029556 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-config\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.035650 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/983beebe-f0c3-4fba-9861-0ea007559cc5-client-ca\") pod \"route-controller-manager-5549b68d6f-t2f7p\" 
(UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.044150 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/983beebe-f0c3-4fba-9861-0ea007559cc5-serving-cert\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.046267 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27n5s\" (UniqueName: \"kubernetes.io/projected/983beebe-f0c3-4fba-9861-0ea007559cc5-kube-api-access-27n5s\") pod \"route-controller-manager-5549b68d6f-t2f7p\" (UID: \"983beebe-f0c3-4fba-9861-0ea007559cc5\") " pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.137046 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.272498 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d061f6d6-1983-405d-93af-3e492ff49f7c" path="/var/lib/kubelet/pods/d061f6d6-1983-405d-93af-3e492ff49f7c/volumes" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.680047 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerStarted","Data":"47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95"} Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.680107 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerStarted","Data":"0951de6b9b7fd10049d964696b15d69e2ae8d48e6cfa6f5e0697f4865e129509"} Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.680319 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.682980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" event={"ID":"015deeab-c778-426c-ae5e-c5a0ab596483","Type":"ContainerDied","Data":"ca4440df3d3dc1f710f5e56dad727aa67c3f72e3f5e9aa92e70564cdf46ea745"} Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.683011 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.683396 4985 scope.go:117] "RemoveContainer" containerID="bd0aba61cb8cec3bb2351d6980fecf8b4fca0c8fed2aec2a8b1c136ba370354d" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.725431 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podStartSLOduration=28.725403699 podStartE2EDuration="28.725403699s" podCreationTimestamp="2026-01-28 18:19:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:47.71149865 +0000 UTC m=+398.538061471" watchObservedRunningTime="2026-01-28 18:19:47.725403699 +0000 UTC m=+398.551966520" Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.726678 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.729830 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-cf8f7d6b6-cb5sn"] Jan 28 18:19:47 crc kubenswrapper[4985]: I0128 18:19:47.827417 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.037209 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p"] Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.416812 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8"] Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.418004 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.423530 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.424479 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-hx5bp" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.431692 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8"] Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.548299 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/81fa949b-5c24-44da-aa29-bd34bcc39d6e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-mttz8\" (UID: \"81fa949b-5c24-44da-aa29-bd34bcc39d6e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.650043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/81fa949b-5c24-44da-aa29-bd34bcc39d6e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-mttz8\" (UID: \"81fa949b-5c24-44da-aa29-bd34bcc39d6e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.657694 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/81fa949b-5c24-44da-aa29-bd34bcc39d6e-tls-certificates\") pod \"prometheus-operator-admission-webhook-f54c54754-mttz8\" (UID: \"81fa949b-5c24-44da-aa29-bd34bcc39d6e\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.692608 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerStarted","Data":"4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea"} Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.692672 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerStarted","Data":"b53b54af51049149b33261bcc18ee5951c7a5aca757e8ef97983d99658b276f4"} Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.694307 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.696499 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" event={"ID":"a73cc747-1671-4ae3-8784-3087a06b300c","Type":"ContainerStarted","Data":"4d9d34679f8306214025d40e7e05333a430787a96e91ea1d0b9bfda90f1f5e96"} Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.706388 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.719660 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podStartSLOduration=3.719635791 podStartE2EDuration="3.719635791s" podCreationTimestamp="2026-01-28 18:19:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:48.714461483 +0000 UTC m=+399.541024304" watchObservedRunningTime="2026-01-28 18:19:48.719635791 +0000 UTC m=+399.546198612" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.730211 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/cluster-monitoring-operator-6d5b84845-sxjv7" podStartSLOduration=2.84696181 podStartE2EDuration="6.730192433s" podCreationTimestamp="2026-01-28 18:19:42 +0000 UTC" firstStartedPulling="2026-01-28 18:19:43.721799397 +0000 UTC m=+394.548362218" lastFinishedPulling="2026-01-28 18:19:47.60503002 +0000 UTC m=+398.431592841" observedRunningTime="2026-01-28 18:19:48.729075421 +0000 UTC m=+399.555638262" watchObservedRunningTime="2026-01-28 18:19:48.730192433 +0000 UTC m=+399.556755264" Jan 28 18:19:48 crc kubenswrapper[4985]: I0128 18:19:48.735933 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:49 crc kubenswrapper[4985]: I0128 18:19:49.173871 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8"] Jan 28 18:19:49 crc kubenswrapper[4985]: I0128 18:19:49.271596 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="015deeab-c778-426c-ae5e-c5a0ab596483" path="/var/lib/kubelet/pods/015deeab-c778-426c-ae5e-c5a0ab596483/volumes" Jan 28 18:19:49 crc kubenswrapper[4985]: I0128 18:19:49.705302 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerStarted","Data":"5ad8c6a87ba49fd9a2dede8b5f892714a6f9410e12e2ed608e32ce98f6fc28b2"} Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.718926 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerStarted","Data":"555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47"} Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.719700 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.727391 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 18:19:51 crc kubenswrapper[4985]: I0128 18:19:51.744085 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podStartSLOduration=2.317591498 podStartE2EDuration="3.744050592s" podCreationTimestamp="2026-01-28 18:19:48 +0000 UTC" firstStartedPulling="2026-01-28 18:19:49.187108702 +0000 UTC m=+400.013671543" 
lastFinishedPulling="2026-01-28 18:19:50.613567816 +0000 UTC m=+401.440130637" observedRunningTime="2026-01-28 18:19:51.737643718 +0000 UTC m=+402.564206589" watchObservedRunningTime="2026-01-28 18:19:51.744050592 +0000 UTC m=+402.570613423" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.482348 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-mxz2k"] Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.483468 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.486574 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.486682 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-r99tt" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.488604 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.499705 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.505736 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-mxz2k"] Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.606557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2mglz\" (UniqueName: \"kubernetes.io/projected/70e8a5a1-0234-4693-910c-97980980b102-kube-api-access-2mglz\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.607314 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.607527 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.607680 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/70e8a5a1-0234-4693-910c-97980980b102-metrics-client-ca\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.708991 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2mglz\" (UniqueName: 
\"kubernetes.io/projected/70e8a5a1-0234-4693-910c-97980980b102-kube-api-access-2mglz\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.709039 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.709078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.709108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/70e8a5a1-0234-4693-910c-97980980b102-metrics-client-ca\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.710208 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/70e8a5a1-0234-4693-910c-97980980b102-metrics-client-ca\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.716390 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.726811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/70e8a5a1-0234-4693-910c-97980980b102-prometheus-operator-tls\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.732818 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2mglz\" (UniqueName: \"kubernetes.io/projected/70e8a5a1-0234-4693-910c-97980980b102-kube-api-access-2mglz\") pod \"prometheus-operator-db54df47d-mxz2k\" (UID: \"70e8a5a1-0234-4693-910c-97980980b102\") " pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:52 crc kubenswrapper[4985]: I0128 18:19:52.800922 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" Jan 28 18:19:53 crc kubenswrapper[4985]: I0128 18:19:53.242841 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-db54df47d-mxz2k"] Jan 28 18:19:53 crc kubenswrapper[4985]: W0128 18:19:53.250679 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70e8a5a1_0234_4693_910c_97980980b102.slice/crio-c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e WatchSource:0}: Error finding container c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e: Status 404 returned error can't find the container with id c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e Jan 28 18:19:53 crc kubenswrapper[4985]: I0128 18:19:53.734515 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" event={"ID":"70e8a5a1-0234-4693-910c-97980980b102","Type":"ContainerStarted","Data":"c20820d2dfb0ea50e0ce5ca03f78d106a68cf341dca616ef74017b6e644b6a3e"} Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.302419 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-77p8r"] Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.303644 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.315741 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-77p8r"] Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-certificates\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343468 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69277fd0-66c2-4094-87fd-eaa80e756e75-installation-pull-secrets\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343501 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-bound-sa-token\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343553 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-tls\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343670 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69277fd0-66c2-4094-87fd-eaa80e756e75-ca-trust-extracted\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkk9d\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-kube-api-access-qkk9d\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.343995 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-trusted-ca\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.385157 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-bound-sa-token\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-tls\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69277fd0-66c2-4094-87fd-eaa80e756e75-ca-trust-extracted\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkk9d\" (UniqueName: 
\"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-kube-api-access-qkk9d\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-trusted-ca\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445679 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-certificates\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.445706 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69277fd0-66c2-4094-87fd-eaa80e756e75-installation-pull-secrets\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.446595 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/69277fd0-66c2-4094-87fd-eaa80e756e75-ca-trust-extracted\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.447723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-trusted-ca\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.447881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-certificates\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.455102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-registry-tls\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.455125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/69277fd0-66c2-4094-87fd-eaa80e756e75-installation-pull-secrets\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc 
kubenswrapper[4985]: I0128 18:19:55.465855 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-bound-sa-token\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.466843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkk9d\" (UniqueName: \"kubernetes.io/projected/69277fd0-66c2-4094-87fd-eaa80e756e75-kube-api-access-qkk9d\") pod \"image-registry-66df7c8f76-77p8r\" (UID: \"69277fd0-66c2-4094-87fd-eaa80e756e75\") " pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.618975 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.776451 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" event={"ID":"70e8a5a1-0234-4693-910c-97980980b102","Type":"ContainerStarted","Data":"4efeb3302ce3218e0f29eb596d414362b4674693cb8a67b347d35ad6f826c17e"} Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.776523 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" event={"ID":"70e8a5a1-0234-4693-910c-97980980b102","Type":"ContainerStarted","Data":"7bc4db6ba3d136cacf0c597a1bf4a228f3460fc9d84dc339cabe2a224d6c1072"} Jan 28 18:19:55 crc kubenswrapper[4985]: I0128 18:19:55.806849 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-db54df47d-mxz2k" podStartSLOduration=2.089136971 podStartE2EDuration="3.806819639s" podCreationTimestamp="2026-01-28 18:19:52 +0000 UTC" firstStartedPulling="2026-01-28 18:19:53.253388729 +0000 UTC m=+404.079951550" lastFinishedPulling="2026-01-28 18:19:54.971071397 +0000 UTC m=+405.797634218" observedRunningTime="2026-01-28 18:19:55.796238796 +0000 UTC m=+406.622801637" watchObservedRunningTime="2026-01-28 18:19:55.806819639 +0000 UTC m=+406.633382460" Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.105863 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-77p8r"] Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.782644 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" event={"ID":"69277fd0-66c2-4094-87fd-eaa80e756e75","Type":"ContainerStarted","Data":"6bdfd07d3b55ddb6af1fcc2d993de932c84c3ee26107404883529b1bdf54dc61"} Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.782714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" event={"ID":"69277fd0-66c2-4094-87fd-eaa80e756e75","Type":"ContainerStarted","Data":"50cdbd822fd2758d9c3fa89ee4f0f4f65a8089e10e59beb3b95396b2dc9a8a5e"} Jan 28 18:19:56 crc kubenswrapper[4985]: I0128 18:19:56.807134 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" podStartSLOduration=1.8071166440000002 podStartE2EDuration="1.807116644s" podCreationTimestamp="2026-01-28 18:19:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:19:56.806126896 +0000 UTC m=+407.632689727" watchObservedRunningTime="2026-01-28 18:19:56.807116644 +0000 UTC m=+407.633679465" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.788316 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.977725 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f"] Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.979429 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.982192 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.982546 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.989131 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.989429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.989729 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-swjfk" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990074 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990180 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvs8d\" (UniqueName: \"kubernetes.io/projected/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-api-access-hvs8d\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-kube-rbac-proxy-config\") pod 
\"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.990561 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.991479 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.992925 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g869q"] Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.994305 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.997184 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-w79sb" Jan 28 18:19:57 crc kubenswrapper[4985]: I0128 18:19:57.997191 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.002217 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.005847 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f"] Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.009788 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-zc8rm"] Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.011653 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.013647 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g869q"] Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.015489 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.016199 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-9gb4s" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.018397 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092398 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxt2j\" (UniqueName: \"kubernetes.io/projected/3d51c83d-3649-47dc-84a7-696f09f28238-kube-api-access-nxt2j\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092439 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-tls\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092459 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-root\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.092721 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: E0128 18:19:58.092902 4985 secret.go:188] Couldn't get secret openshift-monitoring/kube-state-metrics-tls: secret "kube-state-metrics-tls" not found Jan 28 
18:19:58 crc kubenswrapper[4985]: E0128 18:19:58.092986 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls podName:75ed6fc2-db87-4a97-8c9f-1ff8451a9b73 nodeName:}" failed. No retries permitted until 2026-01-28 18:19:58.59296423 +0000 UTC m=+409.419527051 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-state-metrics-tls" (UniqueName: "kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls") pod "kube-state-metrics-777cb5bd5d-lht9f" (UID: "75ed6fc2-db87-4a97-8c9f-1ff8451a9b73") : secret "kube-state-metrics-tls" not found Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093381 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-wtmp\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093499 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093541 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.093647 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094490 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-textfile\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094703 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d51c83d-3649-47dc-84a7-696f09f28238-metrics-client-ca\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szsct\" (UniqueName: \"kubernetes.io/projected/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-kube-api-access-szsct\") pod 
\"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095326 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-volume-directive-shadow\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095463 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095624 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvs8d\" (UniqueName: \"kubernetes.io/projected/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-api-access-hvs8d\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095790 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-sys\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095997 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094756 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.094750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-metrics-client-ca\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.095675 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-volume-directive-shadow\") pod 
\"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.103781 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.118557 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvs8d\" (UniqueName: \"kubernetes.io/projected/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-api-access-hvs8d\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198157 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-textfile\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198212 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d51c83d-3649-47dc-84a7-696f09f28238-metrics-client-ca\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szsct\" (UniqueName: \"kubernetes.io/projected/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-kube-api-access-szsct\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198319 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-sys\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198347 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc 
kubenswrapper[4985]: I0128 18:19:58.198378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxt2j\" (UniqueName: \"kubernetes.io/projected/3d51c83d-3649-47dc-84a7-696f09f28238-kube-api-access-nxt2j\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198405 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-root\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-tls\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-wtmp\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.198526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-root\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199525 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-sys\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199616 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-wtmp\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.199959 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-textfile\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.200650 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/3d51c83d-3649-47dc-84a7-696f09f28238-metrics-client-ca\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.201237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-metrics-client-ca\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.204625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.204700 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-openshift-state-metrics-tls\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.206164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-tls\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.208922 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/3d51c83d-3649-47dc-84a7-696f09f28238-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-zc8rm\" (UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.224881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szsct\" (UniqueName: \"kubernetes.io/projected/6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7-kube-api-access-szsct\") pod \"openshift-state-metrics-566fddb674-g869q\" (UID: \"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7\") " pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.225850 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxt2j\" (UniqueName: \"kubernetes.io/projected/3d51c83d-3649-47dc-84a7-696f09f28238-kube-api-access-nxt2j\") pod \"node-exporter-zc8rm\" 
(UID: \"3d51c83d-3649-47dc-84a7-696f09f28238\") " pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.318159 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.335705 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-zc8rm" Jan 28 18:19:58 crc kubenswrapper[4985]: W0128 18:19:58.361888 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d51c83d_3649_47dc_84a7_696f09f28238.slice/crio-acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f WatchSource:0}: Error finding container acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f: Status 404 returned error can't find the container with id acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.605334 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.611169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/75ed6fc2-db87-4a97-8c9f-1ff8451a9b73-kube-state-metrics-tls\") pod \"kube-state-metrics-777cb5bd5d-lht9f\" (UID: \"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73\") " pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.767153 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-566fddb674-g869q"] Jan 28 18:19:58 crc kubenswrapper[4985]: W0128 18:19:58.781295 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e14cd8d_2ff4_47bb_9b7f_ddc913b81ab7.slice/crio-8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0 WatchSource:0}: Error finding container 8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0: Status 404 returned error can't find the container with id 8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0 Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.794301 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"8a4b0c783facd76925fabd05c38e4ecf2c419400de8b6374771a1459ffc70ad0"} Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.796778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerStarted","Data":"acfeeeb400a349297ae14424e36b5a978881e534e9e563514bc14fc53256004f"} Jan 28 18:19:58 crc kubenswrapper[4985]: I0128 18:19:58.908055 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.070527 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.073231 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.082925 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.082973 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-4gvjp" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.083177 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.083354 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.094880 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.097428 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.099929 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.102475 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.104289 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125383 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-config-volume\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-config-out\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125509 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125575 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125651 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125722 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125834 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-web-config\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125907 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn8tz\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-kube-api-access-vn8tz\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.125965 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " 
pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.128946 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227147 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-web-config\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227204 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn8tz\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-kube-api-access-vn8tz\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-config-volume\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-config-out\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.227436 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.228645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.228913 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.230357 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1321027d-6616-4539-9eef-555f2ef23ecb-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.234624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.235131 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-web-config\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.236469 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.243520 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/1321027d-6616-4539-9eef-555f2ef23ecb-config-out\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.243774 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-config-volume\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.244203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.244486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-tls-assets\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.247686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1321027d-6616-4539-9eef-555f2ef23ecb-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.247744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn8tz\" (UniqueName: \"kubernetes.io/projected/1321027d-6616-4539-9eef-555f2ef23ecb-kube-api-access-vn8tz\") pod \"alertmanager-main-0\" (UID: \"1321027d-6616-4539-9eef-555f2ef23ecb\") " pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.403207 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f"] Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.403462 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.813317 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"383b3b9f387929435084f59da9046b83bf2c5be1da062190b80985e07cb0f308"} Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.813799 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"292b7cd50df079bb29727f9c2491c9917315f95ca7bb8f2e419a217cdab4390a"} Jan 28 18:19:59 crc kubenswrapper[4985]: I0128 18:19:59.816793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"63db952d227ebde5b3dda0cbbb8fc7d5eb81f5b1dfbd7a919ad9e688f2e163fa"} Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.044917 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5695687f7c-8tcz2"] Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.050292 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.053202 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054108 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054230 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054243 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054625 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-sl5xz" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.054820 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-64rgvnkqk08fr" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.057304 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.074380 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5695687f7c-8tcz2"] Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.121868 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141192 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " 
pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141331 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wprbf\" (UniqueName: \"kubernetes.io/projected/1a0dd00c-a59d-4e21-968c-b1a7b1198758-kube-api-access-wprbf\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141469 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141544 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-grpc-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0dd00c-a59d-4e21-968c-b1a7b1198758-metrics-client-ca\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141701 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.141768 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-metrics\") pod 
\"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243494 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wprbf\" (UniqueName: \"kubernetes.io/projected/1a0dd00c-a59d-4e21-968c-b1a7b1198758-kube-api-access-wprbf\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243559 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243607 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-grpc-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243650 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0dd00c-a59d-4e21-968c-b1a7b1198758-metrics-client-ca\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.243821 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.244907 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1a0dd00c-a59d-4e21-968c-b1a7b1198758-metrics-client-ca\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " 
pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253434 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.253637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-grpc-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.255138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-tls\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.257942 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/1a0dd00c-a59d-4e21-968c-b1a7b1198758-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.265306 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wprbf\" (UniqueName: \"kubernetes.io/projected/1a0dd00c-a59d-4e21-968c-b1a7b1198758-kube-api-access-wprbf\") pod \"thanos-querier-5695687f7c-8tcz2\" (UID: \"1a0dd00c-a59d-4e21-968c-b1a7b1198758\") " pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.371547 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.811013 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5695687f7c-8tcz2"] Jan 28 18:20:00 crc kubenswrapper[4985]: W0128 18:20:00.816029 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a0dd00c_a59d_4e21_968c_b1a7b1198758.slice/crio-f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92 WatchSource:0}: Error finding container f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92: Status 404 returned error can't find the container with id f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92 Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.826128 4985 generic.go:334] "Generic (PLEG): container finished" podID="3d51c83d-3649-47dc-84a7-696f09f28238" containerID="28a2d278450a2c0cc5e014ee9a8495af198fadbb119e92489df408ebfcc21209" exitCode=0 Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.826210 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerDied","Data":"28a2d278450a2c0cc5e014ee9a8495af198fadbb119e92489df408ebfcc21209"} Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.833424 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"e4c6aa85ce23ef513dc4565b8a30dc7b0b0cf648cc0b85ecf552de24b6f2e9aa"} Jan 28 18:20:00 crc kubenswrapper[4985]: I0128 18:20:00.839327 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"f9d8a5055415c952ed46b7ea6b05f1a426365e5422135a91f4a09bcd53f7cc92"} Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.726337 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.727679 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.738682 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788086 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788138 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788167 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.788696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.890862 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 
crc kubenswrapper[4985]: I0128 18:20:02.890975 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891016 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891041 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.891105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.892485 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.892497 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.892951 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.893407 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.898304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.898475 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:02 crc kubenswrapper[4985]: I0128 18:20:02.910188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"console-67787765c4-69gqs\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.051167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.346945 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6845d579bb-9lznf"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.348584 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351055 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-5vgqq" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351152 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351162 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-1vakj0kiaupde" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.351884 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.352624 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.355359 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.377089 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6845d579bb-9lznf"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396662 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2kfm\" (UniqueName: \"kubernetes.io/projected/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-kube-api-access-w2kfm\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396789 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-audit-log\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.396934 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-client-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.397009 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-server-tls\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 
18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.397035 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-client-certs\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.397067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-metrics-server-audit-profiles\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498506 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-server-tls\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498564 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-client-certs\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-metrics-server-audit-profiles\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498656 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498711 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2kfm\" (UniqueName: \"kubernetes.io/projected/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-kube-api-access-w2kfm\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-audit-log\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.498787 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-client-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.499671 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-audit-log\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.500221 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.500304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-metrics-server-audit-profiles\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.503399 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-client-ca-bundle\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.507893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-server-tls\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.508636 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-secret-metrics-client-certs\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.519536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2kfm\" (UniqueName: \"kubernetes.io/projected/59d3bb7a-cda7-41ee-b0e1-9db6e930ffde-kube-api-access-w2kfm\") pod \"metrics-server-6845d579bb-9lznf\" (UID: \"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde\") " pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.765004 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.764560 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.775512 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.780772 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-6tstp" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.782172 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.792272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"] Jan 28 18:20:03 crc kubenswrapper[4985]: I0128 18:20:03.986595 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54abc3c0-c9d2-49a3-bc29-854369637b99-monitoring-plugin-cert\") pod \"monitoring-plugin-868c9846bf-6bwkl\" (UID: \"54abc3c0-c9d2-49a3-bc29-854369637b99\") " pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.088571 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54abc3c0-c9d2-49a3-bc29-854369637b99-monitoring-plugin-cert\") pod \"monitoring-plugin-868c9846bf-6bwkl\" (UID: \"54abc3c0-c9d2-49a3-bc29-854369637b99\") " pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.094414 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/54abc3c0-c9d2-49a3-bc29-854369637b99-monitoring-plugin-cert\") pod \"monitoring-plugin-868c9846bf-6bwkl\" (UID: \"54abc3c0-c9d2-49a3-bc29-854369637b99\") " pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.104750 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.429241 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.431868 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.438657 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.438937 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-psvl8" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.439150 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.439335 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.439493 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443631 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443692 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443785 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-dji3dhnh09eo0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443812 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.443808 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.444317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.444345 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.446452 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.461823 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598446 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config-out\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598473 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598495 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598539 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598580 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598594 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99stz\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-kube-api-access-99stz\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598637 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598658 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-web-config\") pod \"prometheus-k8s-0\" (UID: 
\"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598697 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598720 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598803 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.598823 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.617936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6845d579bb-9lznf"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700494 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700548 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700572 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-web-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700653 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700689 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700756 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700841 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config-out\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700900 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.700986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.701008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99stz\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-kube-api-access-99stz\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.701029 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " 
pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.702474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.703996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.712953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.712965 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.712970 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-web-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.713203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.714227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config-out\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc 
kubenswrapper[4985]: I0128 18:20:04.714625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.714637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.715053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.715996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-config\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.716627 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.717715 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.720151 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.721801 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.721938 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99stz\" (UniqueName: \"kubernetes.io/projected/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-kube-api-access-99stz\") pod \"prometheus-k8s-0\" (UID: \"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9\") " pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: W0128 18:20:04.729730 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6ceb598_f81e_4169_acfd_ab2c8c776842.slice/crio-bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd WatchSource:0}: Error finding 
container bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd: Status 404 returned error can't find the container with id bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.738413 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"] Jan 28 18:20:04 crc kubenswrapper[4985]: W0128 18:20:04.742119 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54abc3c0_c9d2_49a3_bc29_854369637b99.slice/crio-99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243 WatchSource:0}: Error finding container 99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243: Status 404 returned error can't find the container with id 99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243 Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.771347 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.874021 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerStarted","Data":"bcade0b67e184262ccbde20e5f5bf5c5baf7b03fe84ea271ec5e44a43d3ba1cc"} Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.877281 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerStarted","Data":"d08ad77c9136e37a1d4202bf2af12cc700af44e341ccbbe505825f2cc0c51b8b"} Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.879503 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerStarted","Data":"bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd"} Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.881735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" event={"ID":"6e14cd8d-2ff4-47bb-9b7f-ddc913b81ab7","Type":"ContainerStarted","Data":"0c418790bdc3f1cab88023dea9fbbb624dc63764dda6954f145d0f9ccbb7443f"} Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.883547 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"e8ebc1be9c061cfa9d730422c9bdec2125f6bf48a63b8b299144374ad79adbc4"} Jan 28 18:20:04 crc kubenswrapper[4985]: I0128 18:20:04.884610 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" event={"ID":"54abc3c0-c9d2-49a3-bc29-854369637b99","Type":"ContainerStarted","Data":"99c213063f5c52fcb703b79b41ef61f758fee113aa443d5773092a141a5f7243"} Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:05.896642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerStarted","Data":"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f"} Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:05.951167 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-monitoring/openshift-state-metrics-566fddb674-g869q" podStartSLOduration=4.074891625 podStartE2EDuration="8.951133576s" podCreationTimestamp="2026-01-28 18:19:57 +0000 UTC" firstStartedPulling="2026-01-28 18:19:59.211984627 +0000 UTC m=+410.038547448" lastFinishedPulling="2026-01-28 18:20:04.088226578 +0000 UTC m=+414.914789399" observedRunningTime="2026-01-28 18:20:05.924720889 +0000 UTC m=+416.751283710" watchObservedRunningTime="2026-01-28 18:20:05.951133576 +0000 UTC m=+416.777696417" Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:05.959057 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-67787765c4-69gqs" podStartSLOduration=3.959029782 podStartE2EDuration="3.959029782s" podCreationTimestamp="2026-01-28 18:20:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:20:05.944320111 +0000 UTC m=+416.770882952" watchObservedRunningTime="2026-01-28 18:20:05.959029782 +0000 UTC m=+416.785592613" Jan 28 18:20:08 crc kubenswrapper[4985]: I0128 18:20:08.966930 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Jan 28 18:20:10 crc kubenswrapper[4985]: I0128 18:20:10.934534 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"95d3dcc3dc6724c73db9e012ed32d1a45c090e852a22b2a26b9416bc53219423"} Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.186069 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.186151 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.186216 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.187050 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.187106 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf" gracePeriod=600 Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.943904 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" 
containerID="593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf" exitCode=0 Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.944031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf"} Jan 28 18:20:11 crc kubenswrapper[4985]: I0128 18:20:11.944435 4985 scope.go:117] "RemoveContainer" containerID="7d78c7e918419c5ffe9f429f47849854684aa8c054910746b74404901dcafffa" Jan 28 18:20:12 crc kubenswrapper[4985]: I0128 18:20:12.951718 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"a378f884ff1c0ba91e84019919ea9054d6ce5924384bb989e907966b0505fbd9"} Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.051440 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.051613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.059748 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.964322 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"0f314032d2d0dad58816b68834f071702110a56bcb3a6cd46dee7b72233c9a13"} Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.969933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-zc8rm" event={"ID":"3d51c83d-3649-47dc-84a7-696f09f28238","Type":"ContainerStarted","Data":"7ad94a5888c7abd7e46fcc1e071bb17e06c0684ed49fb3889ddb377fe42df8bc"} Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.975153 4985 generic.go:334] "Generic (PLEG): container finished" podID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerID="adbac7ee6898806b48324e26df1522d5acab80a3215e82dff7f7129f07c05432" exitCode=0 Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.975295 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerDied","Data":"adbac7ee6898806b48324e26df1522d5acab80a3215e82dff7f7129f07c05432"} Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.979529 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e"} Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.983043 4985 generic.go:334] "Generic (PLEG): container finished" podID="1321027d-6616-4539-9eef-555f2ef23ecb" containerID="8c304a35e184693ad32049c64ace225c07a9f0acca7de0da90d9e220f5938dc4" exitCode=0 Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.983221 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerDied","Data":"8c304a35e184693ad32049c64ace225c07a9f0acca7de0da90d9e220f5938dc4"} Jan 28 18:20:13 crc kubenswrapper[4985]: I0128 18:20:13.989068 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:20:14 crc kubenswrapper[4985]: I0128 18:20:13.999819 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-zc8rm" podStartSLOduration=15.637242695 podStartE2EDuration="16.999791698s" podCreationTimestamp="2026-01-28 18:19:57 +0000 UTC" firstStartedPulling="2026-01-28 18:19:58.365013314 +0000 UTC m=+409.191576135" lastFinishedPulling="2026-01-28 18:19:59.727562317 +0000 UTC m=+410.554125138" observedRunningTime="2026-01-28 18:20:13.994100225 +0000 UTC m=+424.820663066" watchObservedRunningTime="2026-01-28 18:20:13.999791698 +0000 UTC m=+424.826354519" Jan 28 18:20:14 crc kubenswrapper[4985]: I0128 18:20:14.145163 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:20:15 crc kubenswrapper[4985]: I0128 18:20:15.633515 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" Jan 28 18:20:15 crc kubenswrapper[4985]: I0128 18:20:15.740743 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.614635 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5whpv"] Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.617972 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.621990 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.630503 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5whpv"] Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.729279 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-catalog-content\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.729380 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-utilities\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.729635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqblq\" (UniqueName: \"kubernetes.io/projected/5cad9e98-172d-4053-83a3-ebee724a6d9c-kube-api-access-sqblq\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.813759 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.815213 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.818584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.818755 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-utilities\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831390 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqblq\" (UniqueName: \"kubernetes.io/projected/5cad9e98-172d-4053-83a3-ebee724a6d9c-kube-api-access-sqblq\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-catalog-content\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.831625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-utilities\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.832496 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5cad9e98-172d-4053-83a3-ebee724a6d9c-catalog-content\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.866277 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqblq\" (UniqueName: \"kubernetes.io/projected/5cad9e98-172d-4053-83a3-ebee724a6d9c-kube-api-access-sqblq\") pod \"redhat-operators-5whpv\" (UID: \"5cad9e98-172d-4053-83a3-ebee724a6d9c\") " pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.933617 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.933706 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") 
" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.933770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:16 crc kubenswrapper[4985]: I0128 18:20:16.952142 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.034982 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.035057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.035123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.035911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.036121 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.053986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"certified-operators-mclkd\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.132649 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.673411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5whpv"] Jan 28 18:20:17 crc kubenswrapper[4985]: I0128 18:20:17.706238 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.024875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" event={"ID":"75ed6fc2-db87-4a97-8c9f-1ff8451a9b73","Type":"ContainerStarted","Data":"184262c62d244fdfdd37aba42ec0320e853bbdc7b80e58a05161bff9dda86f7a"} Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.028714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" event={"ID":"54abc3c0-c9d2-49a3-bc29-854369637b99","Type":"ContainerStarted","Data":"93ac1d0cc7c88b5c3c834f75aa3e35ddcd99bc494ac09081e5c790cf3de54755"} Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.028942 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.032481 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"dc45b7824da10c6dc1f43a74348d32505c5f1fb53beb023d3d1f41d1deefa38f"} Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.039348 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 18:20:18 crc kubenswrapper[4985]: I0128 18:20:18.048025 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podStartSLOduration=2.6884907 podStartE2EDuration="15.048011017s" podCreationTimestamp="2026-01-28 18:20:03 +0000 UTC" firstStartedPulling="2026-01-28 18:20:04.751456798 +0000 UTC m=+415.578019609" lastFinishedPulling="2026-01-28 18:20:17.110977105 +0000 UTC m=+427.937539926" observedRunningTime="2026-01-28 18:20:18.044400774 +0000 UTC m=+428.870963595" watchObservedRunningTime="2026-01-28 18:20:18.048011017 +0000 UTC m=+428.874573838" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.004690 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-z2xq5"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.015845 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2xq5"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.016049 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.022746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.047443 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerStarted","Data":"7551215f48c6a8439a1b9b8e99500ee1a2e82e6cca161bb1872b67e7ca8260b3"} Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.052272 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerStarted","Data":"9065c3cedcf2c522ec02096a476095855bf69695fefcb13d3535bb45ef54bf89"} Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.171801 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v7m8\" (UniqueName: \"kubernetes.io/projected/d59677ee-1cc3-4635-a126-0383e56d3fc0-kube-api-access-9v7m8\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.171933 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-catalog-content\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.171990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-utilities\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.193351 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-777cb5bd5d-lht9f" podStartSLOduration=17.505167224 podStartE2EDuration="22.193325607s" podCreationTimestamp="2026-01-28 18:19:57 +0000 UTC" firstStartedPulling="2026-01-28 18:19:59.411301127 +0000 UTC m=+410.237863948" lastFinishedPulling="2026-01-28 18:20:04.09945951 +0000 UTC m=+414.926022331" observedRunningTime="2026-01-28 18:20:19.077620763 +0000 UTC m=+429.904183584" watchObservedRunningTime="2026-01-28 18:20:19.193325607 +0000 UTC m=+430.019888428" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.196410 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4fx27"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.197786 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.202980 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.213044 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fx27"] Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273172 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-catalog-content\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273220 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-catalog-content\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-utilities\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273417 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v7m8\" (UniqueName: \"kubernetes.io/projected/d59677ee-1cc3-4635-a126-0383e56d3fc0-kube-api-access-9v7m8\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273440 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wb6bm\" (UniqueName: \"kubernetes.io/projected/478fc51e-7963-4ba3-a5ec-c2b7045b8353-kube-api-access-wb6bm\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273463 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-utilities\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273976 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-catalog-content\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.273985 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d59677ee-1cc3-4635-a126-0383e56d3fc0-utilities\") pod \"community-operators-z2xq5\" (UID: 
\"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.294896 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v7m8\" (UniqueName: \"kubernetes.io/projected/d59677ee-1cc3-4635-a126-0383e56d3fc0-kube-api-access-9v7m8\") pod \"community-operators-z2xq5\" (UID: \"d59677ee-1cc3-4635-a126-0383e56d3fc0\") " pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.345468 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.375643 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wb6bm\" (UniqueName: \"kubernetes.io/projected/478fc51e-7963-4ba3-a5ec-c2b7045b8353-kube-api-access-wb6bm\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.375713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-utilities\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.375783 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-catalog-content\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.376392 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-catalog-content\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.376503 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/478fc51e-7963-4ba3-a5ec-c2b7045b8353-utilities\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.404471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wb6bm\" (UniqueName: \"kubernetes.io/projected/478fc51e-7963-4ba3-a5ec-c2b7045b8353-kube-api-access-wb6bm\") pod \"redhat-marketplace-4fx27\" (UID: \"478fc51e-7963-4ba3-a5ec-c2b7045b8353\") " pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.528860 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:19 crc kubenswrapper[4985]: I0128 18:20:19.808552 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z2xq5"] Jan 28 18:20:19 crc kubenswrapper[4985]: W0128 18:20:19.818167 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd59677ee_1cc3_4635_a126_0383e56d3fc0.slice/crio-8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553 WatchSource:0}: Error finding container 8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553: Status 404 returned error can't find the container with id 8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.001249 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4fx27"] Jan 28 18:20:20 crc kubenswrapper[4985]: W0128 18:20:20.010718 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod478fc51e_7963_4ba3_a5ec_c2b7045b8353.slice/crio-31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19 WatchSource:0}: Error finding container 31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19: Status 404 returned error can't find the container with id 31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.061602 4985 generic.go:334] "Generic (PLEG): container finished" podID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerID="9fb725b7927bf308d0c769e88cf67812255b9577d22dfa62ad7023f08bc0245b" exitCode=0 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.061707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerDied","Data":"9fb725b7927bf308d0c769e88cf67812255b9577d22dfa62ad7023f08bc0245b"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.075175 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerStarted","Data":"7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.088505 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerID="14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da" exitCode=0 Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.088618 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.097796 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"8683c645dd98948d2659c44693aae32885ef9dce31f0ab822a262cfa7cafa553"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.100971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" 
event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"31abe548a91dfa3cf866bfa3e678a15fcc46733ebcd5a0f38cecf26186d89b19"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.108370 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"449c9e01d828adf7beba9fe6a01be63b42c205583713f4a65937700457da64d2"} Jan 28 18:20:20 crc kubenswrapper[4985]: I0128 18:20:20.123858 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podStartSLOduration=2.970504671 podStartE2EDuration="17.123830954s" podCreationTimestamp="2026-01-28 18:20:03 +0000 UTC" firstStartedPulling="2026-01-28 18:20:04.635096035 +0000 UTC m=+415.461658856" lastFinishedPulling="2026-01-28 18:20:18.788422318 +0000 UTC m=+429.614985139" observedRunningTime="2026-01-28 18:20:20.117980866 +0000 UTC m=+430.944543707" watchObservedRunningTime="2026-01-28 18:20:20.123830954 +0000 UTC m=+430.950393805" Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.119065 4985 generic.go:334] "Generic (PLEG): container finished" podID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerID="4c22c62c46381126d354905932ce4d5fa34a0b3162f09f4ea38da18f6853bedc" exitCode=0 Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.119297 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerDied","Data":"4c22c62c46381126d354905932ce4d5fa34a0b3162f09f4ea38da18f6853bedc"} Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.124950 4985 generic.go:334] "Generic (PLEG): container finished" podID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerID="823e2b1b71b59f463d5bbf67578899e292949931e58a5f6ad2ef4edbe6d5b960" exitCode=0 Jan 28 18:20:21 crc kubenswrapper[4985]: I0128 18:20:21.125113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerDied","Data":"823e2b1b71b59f463d5bbf67578899e292949931e58a5f6ad2ef4edbe6d5b960"} Jan 28 18:20:23 crc kubenswrapper[4985]: I0128 18:20:23.767560 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:23 crc kubenswrapper[4985]: I0128 18:20:23.768761 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.198035 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerStarted","Data":"e82e10f5d58ff6df3e265f1309f4b647f09e3bff2517a3cfe802376ea4837d61"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.202203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerStarted","Data":"13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.204526 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"c79998fee84ab3dc59da5883adce38f31b241d4a95cdb40df3cc765408d1dd9d"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.206266 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"90e079a4446c8b474c23d1d3b8fbedc0b9494e5d17b446ba41ad9106fe2c5b92"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.208012 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"7cd224f3704fa894f5a8615b761322d145f0dd17fe13bc47dafdab9320f11378"} Jan 28 18:20:29 crc kubenswrapper[4985]: I0128 18:20:29.210303 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"db9a004e1c5a7dc3f2ee0e744da5f06fe090c8dd6d3fbb3e47a00b888ddbf7d7"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.224204 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerID="13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.224338 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.228468 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"fb8a9c2304bf6f66244b478879235230db7c610d570dea6d124039c7522384b6"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.240506 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"1b877fa7d8957f795b1e4d757b81af0710f69a9bba74b471a2e41dc109f1813c"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.263618 4985 generic.go:334] "Generic (PLEG): container finished" podID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerID="7cd224f3704fa894f5a8615b761322d145f0dd17fe13bc47dafdab9320f11378" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.263628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerDied","Data":"7cd224f3704fa894f5a8615b761322d145f0dd17fe13bc47dafdab9320f11378"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.274016 4985 generic.go:334] "Generic (PLEG): container finished" podID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerID="db9a004e1c5a7dc3f2ee0e744da5f06fe090c8dd6d3fbb3e47a00b888ddbf7d7" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.274152 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerDied","Data":"db9a004e1c5a7dc3f2ee0e744da5f06fe090c8dd6d3fbb3e47a00b888ddbf7d7"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.290209 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"d88cf53b73bae3057faba92c63ccca730cfe5c01f975c73ab0f89f9a55588049"} Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.298787 4985 generic.go:334] "Generic (PLEG): container finished" podID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerID="e82e10f5d58ff6df3e265f1309f4b647f09e3bff2517a3cfe802376ea4837d61" exitCode=0 Jan 28 18:20:30 crc kubenswrapper[4985]: I0128 18:20:30.298909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerDied","Data":"e82e10f5d58ff6df3e265f1309f4b647f09e3bff2517a3cfe802376ea4837d61"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.310020 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.313955 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"e783f67f621c68c3a3e9b3123918004c596e8616a65a72e419519d463b8235a6"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.313999 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" event={"ID":"1a0dd00c-a59d-4e21-968c-b1a7b1198758","Type":"ContainerStarted","Data":"cb50d6901d948ecde4675484c755ac429cbcfbe3f5906639d0d21e77b9bcc6c4"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.315482 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.318428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerStarted","Data":"d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.323922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"b4f332385a51a29e5b49b67fee7d25671a1611c41938c82f993c1577b5fb006c"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.323971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"29c6df97dc0932f2f4a72f8b1540034f084814f47a2b3b915df7e42676f72b43"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.323985 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"ccade00460d333725457a17c55a6a611b5d19a2d263e54b666b27cc9d7fec666"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.324000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9","Type":"ContainerStarted","Data":"12109e23795aa940c009ff928ffb111e8f0605a1b584c2c9d3d93feb16fcd92d"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"027202c651a9e5c3d0d918f93c0f13bd734f866786ea48de1f14d34578d0424c"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329937 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"454dd8faa4ad50b9d7238141ecc2c0f2932b318ee28de2fa0a07bf848bd5a5d6"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"7c433791c80e7ad566bd2f670ea34379fe6553a42437dcce4fef30a3ef587d2a"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.329960 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"1321027d-6616-4539-9eef-555f2ef23ecb","Type":"ContainerStarted","Data":"17f03207c8b6d6941e2ab683982f017305e84e97bed651b24e9a28c3b1353d98"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.332722 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c"} Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.334827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.335427 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4fx27" podStartSLOduration=3.010725417 podStartE2EDuration="12.335415994s" podCreationTimestamp="2026-01-28 18:20:19 +0000 UTC" firstStartedPulling="2026-01-28 18:20:21.364750472 +0000 UTC m=+432.191313293" lastFinishedPulling="2026-01-28 18:20:30.689441039 +0000 UTC m=+441.516003870" observedRunningTime="2026-01-28 18:20:31.329607468 +0000 UTC m=+442.156170309" watchObservedRunningTime="2026-01-28 18:20:31.335415994 +0000 UTC m=+442.161978815" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.358889 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podStartSLOduration=2.298144067 podStartE2EDuration="31.358857226s" podCreationTimestamp="2026-01-28 18:20:00 +0000 UTC" firstStartedPulling="2026-01-28 18:20:00.82308715 +0000 UTC m=+411.649649971" lastFinishedPulling="2026-01-28 18:20:29.883800309 +0000 UTC m=+440.710363130" observedRunningTime="2026-01-28 18:20:31.353899964 +0000 UTC m=+442.180462775" watchObservedRunningTime="2026-01-28 18:20:31.358857226 +0000 UTC m=+442.185420057" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.381004 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-mclkd" podStartSLOduration=4.726471298 podStartE2EDuration="15.380970869s" podCreationTimestamp="2026-01-28 18:20:16 +0000 UTC" firstStartedPulling="2026-01-28 
18:20:20.093554527 +0000 UTC m=+430.920117348" lastFinishedPulling="2026-01-28 18:20:30.748054088 +0000 UTC m=+441.574616919" observedRunningTime="2026-01-28 18:20:31.373328 +0000 UTC m=+442.199890841" watchObservedRunningTime="2026-01-28 18:20:31.380970869 +0000 UTC m=+442.207533710" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.425158 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=14.303916839 podStartE2EDuration="27.425133954s" podCreationTimestamp="2026-01-28 18:20:04 +0000 UTC" firstStartedPulling="2026-01-28 18:20:13.977962383 +0000 UTC m=+424.804525224" lastFinishedPulling="2026-01-28 18:20:27.099179518 +0000 UTC m=+437.925742339" observedRunningTime="2026-01-28 18:20:31.419095101 +0000 UTC m=+442.245657942" watchObservedRunningTime="2026-01-28 18:20:31.425133954 +0000 UTC m=+442.251696775" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.447174 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z2xq5" podStartSLOduration=3.993549186 podStartE2EDuration="13.447152875s" podCreationTimestamp="2026-01-28 18:20:18 +0000 UTC" firstStartedPulling="2026-01-28 18:20:21.361934922 +0000 UTC m=+432.188497743" lastFinishedPulling="2026-01-28 18:20:30.815538611 +0000 UTC m=+441.642101432" observedRunningTime="2026-01-28 18:20:31.44138589 +0000 UTC m=+442.267948721" watchObservedRunningTime="2026-01-28 18:20:31.447152875 +0000 UTC m=+442.273715696" Jan 28 18:20:31 crc kubenswrapper[4985]: I0128 18:20:31.472809 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=6.973572467 podStartE2EDuration="32.472785059s" podCreationTimestamp="2026-01-28 18:19:59 +0000 UTC" firstStartedPulling="2026-01-28 18:20:00.133211577 +0000 UTC m=+410.959774398" lastFinishedPulling="2026-01-28 18:20:25.632424179 +0000 UTC m=+436.458986990" observedRunningTime="2026-01-28 18:20:31.468490046 +0000 UTC m=+442.295052867" watchObservedRunningTime="2026-01-28 18:20:31.472785059 +0000 UTC m=+442.299347870" Jan 28 18:20:32 crc kubenswrapper[4985]: E0128 18:20:32.590837 4985 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: configmap "prometheus-k8s-rulefiles-0" not found Jan 28 18:20:32 crc kubenswrapper[4985]: E0128 18:20:32.591012 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0 podName:44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9 nodeName:}" failed. No retries permitted until 2026-01-28 18:20:33.090985573 +0000 UTC m=+443.917548604 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9") : configmap "prometheus-k8s-rulefiles-0" not found Jan 28 18:20:34 crc kubenswrapper[4985]: I0128 18:20:34.772821 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.441440 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5whpv" event={"ID":"5cad9e98-172d-4053-83a3-ebee724a6d9c","Type":"ContainerStarted","Data":"2a4fec7ddb6f9b88bf6eb9d3cb66a2ad0edb77691fda84f03aa283e5cf269853"} Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.486211 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-5whpv" podStartSLOduration=4.47659808 podStartE2EDuration="20.486184181s" podCreationTimestamp="2026-01-28 18:20:16 +0000 UTC" firstStartedPulling="2026-01-28 18:20:20.065870954 +0000 UTC m=+430.892433775" lastFinishedPulling="2026-01-28 18:20:36.075457055 +0000 UTC m=+446.902019876" observedRunningTime="2026-01-28 18:20:36.484929995 +0000 UTC m=+447.311492816" watchObservedRunningTime="2026-01-28 18:20:36.486184181 +0000 UTC m=+447.312747022" Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.953472 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:36 crc kubenswrapper[4985]: I0128 18:20:36.953814 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.133799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.133853 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.195014 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.515689 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 18:20:37 crc kubenswrapper[4985]: I0128 18:20:37.998519 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output=< Jan 28 18:20:37 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:20:37 crc kubenswrapper[4985]: > Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.195778 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" containerID="cri-o://943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887" gracePeriod=15 Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.217758 4985 patch_prober.go:28] interesting pod/console-f9d7485db-b5t5k container/console namespace/openshift-console: Readiness probe status=failure output="Get 
\"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body= Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.218320 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-f9d7485db-b5t5k" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" probeResult="failure" output="Get \"https://10.217.0.12:8443/health\": dial tcp 10.217.0.12:8443: connect: connection refused" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.346194 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.346294 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.431196 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.510028 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.530005 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.530095 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:39 crc kubenswrapper[4985]: I0128 18:20:39.574164 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.477390 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b5t5k_c7f9c411-3899-4824-a051-b18ad42a950e/console/0.log" Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.477468 4985 generic.go:334] "Generic (PLEG): container finished" podID="c7f9c411-3899-4824-a051-b18ad42a950e" containerID="943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887" exitCode=2 Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.477596 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerDied","Data":"943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887"} Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.521959 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 18:20:40 crc kubenswrapper[4985]: I0128 18:20:40.790801 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" containerID="cri-o://2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829" gracePeriod=30 Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.657372 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b5t5k_c7f9c411-3899-4824-a051-b18ad42a950e/console/0.log" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.657871 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855006 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855133 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855286 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855349 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.855437 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") pod \"c7f9c411-3899-4824-a051-b18ad42a950e\" (UID: \"c7f9c411-3899-4824-a051-b18ad42a950e\") " Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856527 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca" (OuterVolumeSpecName: "service-ca") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856520 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856625 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.856680 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config" (OuterVolumeSpecName: "console-config") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.862900 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.864165 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.865484 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv" (OuterVolumeSpecName: "kube-api-access-2dbkv") pod "c7f9c411-3899-4824-a051-b18ad42a950e" (UID: "c7f9c411-3899-4824-a051-b18ad42a950e"). InnerVolumeSpecName "kube-api-access-2dbkv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957604 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957653 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957666 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957678 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957690 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c7f9c411-3899-4824-a051-b18ad42a950e-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957699 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dbkv\" (UniqueName: \"kubernetes.io/projected/c7f9c411-3899-4824-a051-b18ad42a950e-kube-api-access-2dbkv\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:41 crc kubenswrapper[4985]: I0128 18:20:41.957711 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c7f9c411-3899-4824-a051-b18ad42a950e-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.495215 4985 generic.go:334] "Generic (PLEG): container finished" podID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerID="2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829" exitCode=0 Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.495299 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerDied","Data":"2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829"} Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.497892 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-b5t5k_c7f9c411-3899-4824-a051-b18ad42a950e/console/0.log" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.497964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-b5t5k" event={"ID":"c7f9c411-3899-4824-a051-b18ad42a950e","Type":"ContainerDied","Data":"0c4fa24c07af4cdb6a65715225f501e2d489d532f902d5a36a0225bc9b457962"} Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.498012 4985 scope.go:117] "RemoveContainer" containerID="943b5760deb612fe5b4be1e63f359ae8850d9ab9f8d1a6ec8e6e298f7bb9f887" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.498037 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-b5t5k" Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.537501 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:20:42 crc kubenswrapper[4985]: I0128 18:20:42.547146 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-b5t5k"] Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.279680 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" path="/var/lib/kubelet/pods/c7f9c411-3899-4824-a051-b18ad42a950e/volumes" Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.774639 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.781771 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 18:20:43 crc kubenswrapper[4985]: I0128 18:20:43.958002 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.093756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094035 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094093 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094130 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094154 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.094328 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") pod \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\" (UID: \"23852c5a-64eb-4a56-8fbb-2e91b16a8429\") " Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.095106 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.095539 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.099809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.100163 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.100392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl" (OuterVolumeSpecName: "kube-api-access-ppzfl") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "kube-api-access-ppzfl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.103873 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.111946 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.117932 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "23852c5a-64eb-4a56-8fbb-2e91b16a8429" (UID: "23852c5a-64eb-4a56-8fbb-2e91b16a8429"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195846 4985 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195894 4985 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/23852c5a-64eb-4a56-8fbb-2e91b16a8429-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195904 4985 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/23852c5a-64eb-4a56-8fbb-2e91b16a8429-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195916 4985 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195926 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppzfl\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-kube-api-access-ppzfl\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195934 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/23852c5a-64eb-4a56-8fbb-2e91b16a8429-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.195945 4985 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/23852c5a-64eb-4a56-8fbb-2e91b16a8429-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.516003 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" event={"ID":"23852c5a-64eb-4a56-8fbb-2e91b16a8429","Type":"ContainerDied","Data":"718f56cadfa73ec9c883cb72f3a4ad761b62779dbd38dd0559a00a1f1b0a3abc"} Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.516440 4985 scope.go:117] "RemoveContainer" containerID="2385b533945171f57d477a41059659216495ddfbdd0280843de749e41c577829" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.516047 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-4k6qp" Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.554043 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:20:44 crc kubenswrapper[4985]: I0128 18:20:44.559119 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-4k6qp"] Jan 28 18:20:45 crc kubenswrapper[4985]: I0128 18:20:45.272411 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" path="/var/lib/kubelet/pods/23852c5a-64eb-4a56-8fbb-2e91b16a8429/volumes" Jan 28 18:20:47 crc kubenswrapper[4985]: I0128 18:20:47.025663 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:20:47 crc kubenswrapper[4985]: I0128 18:20:47.103184 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5whpv" Jan 28 18:21:04 crc kubenswrapper[4985]: I0128 18:21:04.772779 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:21:04 crc kubenswrapper[4985]: I0128 18:21:04.831382 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:21:05 crc kubenswrapper[4985]: I0128 18:21:05.732736 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.334986 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:21:29 crc kubenswrapper[4985]: E0128 18:21:29.336019 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336035 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" Jan 28 18:21:29 crc kubenswrapper[4985]: E0128 18:21:29.336065 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336072 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336209 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7f9c411-3899-4824-a051-b18ad42a950e" containerName="console" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336220 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="23852c5a-64eb-4a56-8fbb-2e91b16a8429" containerName="registry" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.336837 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.371605 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491360 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491767 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491909 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491942 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.491979 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.492005 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc 
kubenswrapper[4985]: I0128 18:21:29.593368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593394 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593462 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.593516 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595618 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595643 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595668 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.595736 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.607135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.607136 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.614785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"console-cd8f6d96f-p5cf4\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.660496 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:29 crc kubenswrapper[4985]: I0128 18:21:29.892080 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:21:30 crc kubenswrapper[4985]: I0128 18:21:30.886129 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerStarted","Data":"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55"} Jan 28 18:21:30 crc kubenswrapper[4985]: I0128 18:21:30.888076 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerStarted","Data":"6757ef85c9af6b8087e2bbaecccf725d4d9f1d7a4e12622260f4ddbd98525b61"} Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.660771 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.662556 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.670675 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.698591 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-cd8f6d96f-p5cf4" podStartSLOduration=10.698565162 podStartE2EDuration="10.698565162s" podCreationTimestamp="2026-01-28 18:21:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:21:30.920772599 +0000 UTC m=+501.747335450" watchObservedRunningTime="2026-01-28 
18:21:39.698565162 +0000 UTC m=+510.525128013" Jan 28 18:21:39 crc kubenswrapper[4985]: I0128 18:21:39.969949 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:21:40 crc kubenswrapper[4985]: I0128 18:21:40.057952 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.099584 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-67787765c4-69gqs" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" containerID="cri-o://8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" gracePeriod=15 Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.541031 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67787765c4-69gqs_c6ceb598-f81e-4169-acfd-ab2c8c776842/console/0.log" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.541642 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731474 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731539 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731626 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731772 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.731837 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") pod \"c6ceb598-f81e-4169-acfd-ab2c8c776842\" (UID: \"c6ceb598-f81e-4169-acfd-ab2c8c776842\") " Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.732934 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca" (OuterVolumeSpecName: "service-ca") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.733117 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config" (OuterVolumeSpecName: "console-config") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.733235 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.733446 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.738888 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.739679 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.741576 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm" (OuterVolumeSpecName: "kube-api-access-d4fqm") pod "c6ceb598-f81e-4169-acfd-ab2c8c776842" (UID: "c6ceb598-f81e-4169-acfd-ab2c8c776842"). InnerVolumeSpecName "kube-api-access-d4fqm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834582 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834789 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834808 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834827 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4fqm\" (UniqueName: \"kubernetes.io/projected/c6ceb598-f81e-4169-acfd-ab2c8c776842-kube-api-access-d4fqm\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834847 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834864 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:05 crc kubenswrapper[4985]: I0128 18:22:05.834880 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/c6ceb598-f81e-4169-acfd-ab2c8c776842-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.167805 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-67787765c4-69gqs_c6ceb598-f81e-4169-acfd-ab2c8c776842/console/0.log" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.167961 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" exitCode=2 Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168044 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-67787765c4-69gqs" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerDied","Data":"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f"} Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168150 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-67787765c4-69gqs" event={"ID":"c6ceb598-f81e-4169-acfd-ab2c8c776842","Type":"ContainerDied","Data":"bdbe6f2aec65bc58869dd434608fa821e03e84b7c37f1ceb2deadfec161fa8fd"} Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.168223 4985 scope.go:117] "RemoveContainer" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.194705 4985 scope.go:117] "RemoveContainer" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" Jan 28 18:22:06 crc kubenswrapper[4985]: E0128 18:22:06.195580 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f\": container with ID starting with 8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f not found: ID does not exist" containerID="8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.195669 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f"} err="failed to get container status \"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f\": rpc error: code = NotFound desc = could not find container \"8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f\": container with ID starting with 8028d4939dded7daec23c0c389b17829ce7fc711178b52dbcc1bdfade550ca2f not found: ID does not exist" Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.202988 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:22:06 crc kubenswrapper[4985]: I0128 18:22:06.207141 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-67787765c4-69gqs"] Jan 28 18:22:07 crc kubenswrapper[4985]: I0128 18:22:07.275769 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" path="/var/lib/kubelet/pods/c6ceb598-f81e-4169-acfd-ab2c8c776842/volumes" Jan 28 18:22:41 crc kubenswrapper[4985]: I0128 18:22:41.186224 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:22:41 crc kubenswrapper[4985]: I0128 18:22:41.187360 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:23:11 crc kubenswrapper[4985]: I0128 18:23:11.185735 4985 patch_prober.go:28] interesting 
pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:23:11 crc kubenswrapper[4985]: I0128 18:23:11.186550 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.186697 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.187749 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.187843 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.189028 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.189166 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e" gracePeriod=600 Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.898388 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e" exitCode=0 Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.898439 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e"} Jan 28 18:23:41 crc kubenswrapper[4985]: I0128 18:23:41.898485 4985 scope.go:117] "RemoveContainer" containerID="593af0e54c9d9c5d6a1c9d6b82650336d416f9c59d7bd7f797ef21c62cc91daf" Jan 28 18:23:42 crc kubenswrapper[4985]: I0128 18:23:42.909652 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2"} Jan 28 18:24:16 
crc kubenswrapper[4985]: I0128 18:24:16.460739 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"] Jan 28 18:24:16 crc kubenswrapper[4985]: E0128 18:24:16.461569 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.461581 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.461702 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6ceb598-f81e-4169-acfd-ab2c8c776842" containerName="console" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.462536 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.465288 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.477358 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"] Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.490593 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.490684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.490746 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.592328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.592397 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.592438 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.593022 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.593301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.619744 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:16 crc kubenswrapper[4985]: I0128 18:24:16.782336 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:17 crc kubenswrapper[4985]: I0128 18:24:17.291149 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg"] Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.186858 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerID="45d4670b1ff63e8549d859b628e6848fe37b4078f1a01f540b83faa92b3a8bed" exitCode=0 Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.186985 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"45d4670b1ff63e8549d859b628e6848fe37b4078f1a01f540b83faa92b3a8bed"} Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.187411 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerStarted","Data":"254f2190b65ac08c219c05e075f548fc377bb0cbde4613a62a45eaad2b561308"} Jan 28 18:24:18 crc kubenswrapper[4985]: I0128 18:24:18.189212 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:24:19 crc kubenswrapper[4985]: E0128 18:24:19.439894 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ffee15_9ee0_496b_920f_87dd09fd08ec.slice/crio-6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc3ffee15_9ee0_496b_920f_87dd09fd08ec.slice/crio-conmon-6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:24:20 crc kubenswrapper[4985]: I0128 18:24:20.219160 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerID="6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29" exitCode=0 Jan 28 18:24:20 crc kubenswrapper[4985]: I0128 18:24:20.219213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"6c0595555fe695769c7a9af36fd5893cfae3e92ceb1f67c90b40b527b716cd29"} Jan 28 18:24:21 crc kubenswrapper[4985]: I0128 18:24:21.229496 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerID="2a92611d01914b1660fd1dc8c220df25068014a23c7e0b8c660dc130da89e309" exitCode=0 Jan 28 18:24:21 crc kubenswrapper[4985]: I0128 18:24:21.229611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"2a92611d01914b1660fd1dc8c220df25068014a23c7e0b8c660dc130da89e309"} Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.482126 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.592878 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") pod \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.592973 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") pod \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.593128 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") pod \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\" (UID: \"c3ffee15-9ee0-496b-920f-87dd09fd08ec\") " Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.595758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle" (OuterVolumeSpecName: "bundle") pod "c3ffee15-9ee0-496b-920f-87dd09fd08ec" (UID: "c3ffee15-9ee0-496b-920f-87dd09fd08ec"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.608115 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d" (OuterVolumeSpecName: "kube-api-access-d4j6d") pod "c3ffee15-9ee0-496b-920f-87dd09fd08ec" (UID: "c3ffee15-9ee0-496b-920f-87dd09fd08ec"). InnerVolumeSpecName "kube-api-access-d4j6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.612199 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util" (OuterVolumeSpecName: "util") pod "c3ffee15-9ee0-496b-920f-87dd09fd08ec" (UID: "c3ffee15-9ee0-496b-920f-87dd09fd08ec"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.695311 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.695360 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4j6d\" (UniqueName: \"kubernetes.io/projected/c3ffee15-9ee0-496b-920f-87dd09fd08ec-kube-api-access-d4j6d\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:22 crc kubenswrapper[4985]: I0128 18:24:22.695383 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c3ffee15-9ee0-496b-920f-87dd09fd08ec-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:23 crc kubenswrapper[4985]: I0128 18:24:23.247084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" event={"ID":"c3ffee15-9ee0-496b-920f-87dd09fd08ec","Type":"ContainerDied","Data":"254f2190b65ac08c219c05e075f548fc377bb0cbde4613a62a45eaad2b561308"} Jan 28 18:24:23 crc kubenswrapper[4985]: I0128 18:24:23.247137 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="254f2190b65ac08c219c05e075f548fc377bb0cbde4613a62a45eaad2b561308" Jan 28 18:24:23 crc kubenswrapper[4985]: I0128 18:24:23.247221 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg" Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.517432 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"] Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518417 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller" containerID="cri-o://c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" gracePeriod=30 Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518477 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" gracePeriod=30 Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518495 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd" containerID="cri-o://4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" gracePeriod=30 Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518519 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node" containerID="cri-o://6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" gracePeriod=30 Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518484 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb" 
containerID="cri-o://b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" gracePeriod=30 Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518536 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb" containerID="cri-o://10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" gracePeriod=30 Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.518558 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging" containerID="cri-o://ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" gracePeriod=30 Jan 28 18:24:27 crc kubenswrapper[4985]: I0128 18:24:27.568562 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" containerID="cri-o://e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" gracePeriod=30 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.302039 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovnkube-controller/3.log" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.304398 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-acl-logging/0.log" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.304846 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-controller/0.log" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305314 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" exitCode=0 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305336 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" exitCode=0 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305343 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" exitCode=0 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305349 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" exitCode=0 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305355 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" exitCode=143 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305366 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" exitCode=143 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305389 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"} Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305467 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"} Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"} Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305495 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"} Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305509 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"} Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305522 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"} Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.305532 4985 scope.go:117] "RemoveContainer" containerID="8e29377c8dd98c4f57f6631e9fa8b7b8a821979d32249c998da8ef2191a8ffdc" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.307642 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/2.log" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308182 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/1.log" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308241 4985 generic.go:334] "Generic (PLEG): container finished" podID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535" exitCode=2 Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308282 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerDied","Data":"95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535"} Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.308866 4985 scope.go:117] "RemoveContainer" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.309143 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-g2g4k_openshift-multus(14fdd73a-b8dd-42da-88b4-2ccb314c4f7a)\"" pod="openshift-multus/multus-g2g4k" podUID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" Jan 28 18:24:28 crc 
kubenswrapper[4985]: I0128 18:24:28.331125 4985 scope.go:117] "RemoveContainer" containerID="72ecdcb1ae6951d349f0b301298f2284e9099db3a733f50ef44e4ac66a875b4c" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.767322 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-acl-logging/0.log" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.768081 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-controller/0.log" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.768477 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794112 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794182 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794276 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794288 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794308 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794356 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794375 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794400 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794436 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794449 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794486 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-cni-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794513 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794672 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794677 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794727 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794755 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794761 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "ovnkube-script-lib". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794795 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794807 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794839 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794869 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794901 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log" (OuterVolumeSpecName: "node-log") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794910 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794925 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794929 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794932 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794947 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794975 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794945 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket" (OuterVolumeSpecName: "log-socket") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.794958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash" (OuterVolumeSpecName: "host-slash") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795025 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") pod \"bd7b8cde-d2fe-4842-857e-545172f5bd12\" (UID: \"bd7b8cde-d2fe-4842-857e-545172f5bd12\") " Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795290 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795391 4985 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795410 4985 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795422 4985 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795434 4985 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795446 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795457 4985 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795471 4985 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795480 4985 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795489 4985 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795497 4985 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795506 4985 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795515 4985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795523 4985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/bd7b8cde-d2fe-4842-857e-545172f5bd12-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 
18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795533 4985 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795542 4985 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795555 4985 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.795565 4985 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.803627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd" (OuterVolumeSpecName: "kube-api-access-ktbbd") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "kube-api-access-ktbbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.821546 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.831627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "bd7b8cde-d2fe-4842-857e-545172f5bd12" (UID: "bd7b8cde-d2fe-4842-857e-545172f5bd12"). InnerVolumeSpecName "run-systemd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.901632 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/bd7b8cde-d2fe-4842-857e-545172f5bd12-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.901672 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ktbbd\" (UniqueName: \"kubernetes.io/projected/bd7b8cde-d2fe-4842-857e-545172f5bd12-kube-api-access-ktbbd\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.901683 4985 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/bd7b8cde-d2fe-4842-857e-545172f5bd12-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.960914 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-t7xb2"] Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961186 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961201 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961211 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961270 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961283 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961294 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961300 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="extract" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961305 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="extract" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961314 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961319 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961325 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961331 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961342 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961348 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961363 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961373 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961383 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="pull" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961389 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="pull" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961403 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="util" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961409 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="util" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961419 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961425 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961434 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kubecfg-setup" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961440 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kubecfg-setup" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961448 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961453 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961461 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961467 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961605 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="sbdb" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961617 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961627 4985 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961635 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961643 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="northd" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961650 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961659 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961667 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="nbdb" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961678 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3ffee15-9ee0-496b-920f-87dd09fd08ec" containerName="extract" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961685 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="kube-rbac-proxy-node" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961696 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-acl-logging" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961705 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovn-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.961812 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961821 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.961947 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: E0128 18:24:28.962095 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.962106 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerName="ovnkube-controller" Jan 28 18:24:28 crc kubenswrapper[4985]: I0128 18:24:28.963969 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002781 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-etc-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-node-log\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002874 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-config\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.002893 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-systemd-units\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003080 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-systemd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-kubelet\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003213 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-env-overrides\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003348 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovn-node-metrics-cert\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003387 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-slash\") 
pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003451 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003561 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-log-socket\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003661 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-netd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003818 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-ovn\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fflxj\" (UniqueName: \"kubernetes.io/projected/5eaf2e7f-83ab-438b-8de3-75886a97ada4-kube-api-access-fflxj\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003932 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-bin\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.003989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-script-lib\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.004102 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-var-lib-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.004133 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.004164 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-netns\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-ovn\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105468 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fflxj\" (UniqueName: \"kubernetes.io/projected/5eaf2e7f-83ab-438b-8de3-75886a97ada4-kube-api-access-fflxj\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105516 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-bin\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-script-lib\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105596 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-var-lib-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105625 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105655 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-netns\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105694 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-etc-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-node-log\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-ovn\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.106416 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-bin\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-script-lib\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107483 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-var-lib-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107530 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-openvswitch\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107582 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-netns\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107626 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-etc-openvswitch\") pod 
\"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.105732 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-node-log\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107695 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-config\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107724 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-systemd-units\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107763 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-systemd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107790 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-kubelet\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107816 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-env-overrides\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107858 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovn-node-metrics-cert\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107891 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-slash\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107917 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.107962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-log-socket\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-netd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108036 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108145 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovnkube-config\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-systemd-units\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108937 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-run-systemd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.108967 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-kubelet\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.109415 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5eaf2e7f-83ab-438b-8de3-75886a97ada4-env-overrides\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 
crc kubenswrapper[4985]: I0128 18:24:29.109967 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-run-ovn-kubernetes\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.109986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-log-socket\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.110077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-cni-netd\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.110106 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5eaf2e7f-83ab-438b-8de3-75886a97ada4-host-slash\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.112788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5eaf2e7f-83ab-438b-8de3-75886a97ada4-ovn-node-metrics-cert\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.124522 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fflxj\" (UniqueName: \"kubernetes.io/projected/5eaf2e7f-83ab-438b-8de3-75886a97ada4-kube-api-access-fflxj\") pod \"ovnkube-node-t7xb2\" (UID: \"5eaf2e7f-83ab-438b-8de3-75886a97ada4\") " pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.287608 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.322659 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-acl-logging/0.log" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323065 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-zd8w7_bd7b8cde-d2fe-4842-857e-545172f5bd12/ovn-controller/0.log" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323437 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" exitCode=0 Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323462 4985 generic.go:334] "Generic (PLEG): container finished" podID="bd7b8cde-d2fe-4842-857e-545172f5bd12" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" exitCode=0 Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323518 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323546 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323556 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" event={"ID":"bd7b8cde-d2fe-4842-857e-545172f5bd12","Type":"ContainerDied","Data":"9117799cf1251ac2e6249271f6bb1afef404c88ff5ec539853a26094bc4a4ad3"} Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323577 4985 scope.go:117] "RemoveContainer" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.323734 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zd8w7" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.330567 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/2.log" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.348071 4985 scope.go:117] "RemoveContainer" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.392486 4985 scope.go:117] "RemoveContainer" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.429816 4985 scope.go:117] "RemoveContainer" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.432406 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"] Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.439843 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zd8w7"] Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.492384 4985 scope.go:117] "RemoveContainer" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.544555 4985 scope.go:117] "RemoveContainer" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.601029 4985 scope.go:117] "RemoveContainer" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.625015 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eaf2e7f_83ab_438b_8de3_75886a97ada4.slice/crio-conmon-40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5eaf2e7f_83ab_438b_8de3_75886a97ada4.slice/crio-40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.636507 4985 scope.go:117] "RemoveContainer" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.704457 4985 scope.go:117] "RemoveContainer" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.739569 4985 scope.go:117] "RemoveContainer" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.741265 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": container with ID starting with e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154 not found: ID does not exist" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.741300 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"} err="failed to get container status \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": rpc error: code = NotFound desc = could not find container \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": container with ID starting with e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.741328 4985 scope.go:117] "RemoveContainer" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.743576 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": container with ID starting with 10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049 not found: ID does not exist" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.743600 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"} err="failed to get container status \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": rpc error: code = NotFound desc = could not find container \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": container with ID starting with 10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.743615 4985 scope.go:117] "RemoveContainer" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.745841 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": container with ID starting with b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290 not found: ID does not exist" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.745886 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"} err="failed to get container status \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": rpc error: code = NotFound desc = could not find container \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": container with ID starting with b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.745914 4985 scope.go:117] "RemoveContainer" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.746172 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": container with ID starting with 4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022 not found: ID does not exist" 
containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.746210 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"} err="failed to get container status \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": rpc error: code = NotFound desc = could not find container \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": container with ID starting with 4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.746226 4985 scope.go:117] "RemoveContainer" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.749457 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": container with ID starting with 7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493 not found: ID does not exist" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749498 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} err="failed to get container status \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": rpc error: code = NotFound desc = could not find container \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": container with ID starting with 7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749513 4985 scope.go:117] "RemoveContainer" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.749731 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": container with ID starting with 6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4 not found: ID does not exist" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749769 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} err="failed to get container status \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": rpc error: code = NotFound desc = could not find container \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": container with ID starting with 6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.749786 4985 scope.go:117] "RemoveContainer" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.749983 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": container with ID starting with ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07 not found: ID does not exist" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750003 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"} err="failed to get container status \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": rpc error: code = NotFound desc = could not find container \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": container with ID starting with ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750033 4985 scope.go:117] "RemoveContainer" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.750237 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": container with ID starting with c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2 not found: ID does not exist" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750355 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"} err="failed to get container status \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": rpc error: code = NotFound desc = could not find container \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": container with ID starting with c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750369 4985 scope.go:117] "RemoveContainer" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: E0128 18:24:29.750591 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": container with ID starting with da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13 not found: ID does not exist" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750610 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13"} err="failed to get container status \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": rpc error: code = NotFound desc = could not find container \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": container with ID starting with da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750652 4985 scope.go:117] "RemoveContainer" containerID="e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154" Jan 28 18:24:29 crc 
kubenswrapper[4985]: I0128 18:24:29.750821 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154"} err="failed to get container status \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": rpc error: code = NotFound desc = could not find container \"e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154\": container with ID starting with e5c7f312f69c421799114a2cc706038ae54a33d5da0d2bdf5eb4062f66508154 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.750839 4985 scope.go:117] "RemoveContainer" containerID="10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751026 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049"} err="failed to get container status \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": rpc error: code = NotFound desc = could not find container \"10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049\": container with ID starting with 10f783704bbdfb9c7db66301bdd826bf41a4dbc8250352c322889b59b9460049 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751066 4985 scope.go:117] "RemoveContainer" containerID="b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751227 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290"} err="failed to get container status \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": rpc error: code = NotFound desc = could not find container \"b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290\": container with ID starting with b696860811fbf96517b121183b44cdce8e4d1a58247233aa0bd78ab0fee44290 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751244 4985 scope.go:117] "RemoveContainer" containerID="4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751445 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022"} err="failed to get container status \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": rpc error: code = NotFound desc = could not find container \"4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022\": container with ID starting with 4c8e3143d29c36a10de45cb53e834a9b07c3bc3649399e4ebe300d6d0b402022 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751495 4985 scope.go:117] "RemoveContainer" containerID="7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751690 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493"} err="failed to get container status \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": rpc error: code = NotFound desc = could not find container \"7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493\": container with ID 
starting with 7451a9b7105897941e9b5b8de3418017338a5bb8c8af09a6d6601b53351f1493 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751736 4985 scope.go:117] "RemoveContainer" containerID="6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751930 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4"} err="failed to get container status \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": rpc error: code = NotFound desc = could not find container \"6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4\": container with ID starting with 6c8fe72265d8e04b63c12d4a95af4336fc264f1e3dbcc639f18b6b31821181a4 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.751972 4985 scope.go:117] "RemoveContainer" containerID="ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752139 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07"} err="failed to get container status \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": rpc error: code = NotFound desc = could not find container \"ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07\": container with ID starting with ce6def547fc685d47f8d372ebe892dd6ffc160b34a74f8cdc1183b791d05bf07 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752161 4985 scope.go:117] "RemoveContainer" containerID="c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752376 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2"} err="failed to get container status \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": rpc error: code = NotFound desc = could not find container \"c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2\": container with ID starting with c1ba661e0f5c712979b5bcfce9e0bcd1ea88586f042266a599d5107d0425f9b2 not found: ID does not exist" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752393 4985 scope.go:117] "RemoveContainer" containerID="da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13" Jan 28 18:24:29 crc kubenswrapper[4985]: I0128 18:24:29.752571 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13"} err="failed to get container status \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": rpc error: code = NotFound desc = could not find container \"da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13\": container with ID starting with da1cb4c75349541b0dfee050b82d19c63ed0cd4d6684860ef2fc4c6d8f2f7f13 not found: ID does not exist" Jan 28 18:24:30 crc kubenswrapper[4985]: I0128 18:24:30.337526 4985 generic.go:334] "Generic (PLEG): container finished" podID="5eaf2e7f-83ab-438b-8de3-75886a97ada4" containerID="40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791" exitCode=0 Jan 28 18:24:30 crc kubenswrapper[4985]: I0128 18:24:30.337569 4985 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerDied","Data":"40403e856521d655954c572d23f008f7a413527effb3b0ae52c77869649a3791"} Jan 28 18:24:30 crc kubenswrapper[4985]: I0128 18:24:30.337592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"b7175ec38ee5684e88d07daad8a37cb7e95b9291762bbeff20ca302d93347d51"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.273965 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7b8cde-d2fe-4842-857e-545172f5bd12" path="/var/lib/kubelet/pods/bd7b8cde-d2fe-4842-857e-545172f5bd12/volumes" Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"677d53264845f1178736ce4c75b59139b9435a9d9962fc83fd5f67f7cb8c74e4"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360491 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"1a021b7cb135439167793d3a9270e28bd03b752b3dfbea56473b20c8b53e64a2"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360506 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"95d9f4be877a771d4082a16a854680569ae96249433bd2133eb0bf3ba433741d"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360519 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"922936e9ef6c305256663e7c5e2628237c01472b317ba492282a9bb9fec0a09e"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360530 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"c1fd07714381094ef88219d7d1ece4e146a19f50355bf88e062e6ee355789b5b"} Jan 28 18:24:31 crc kubenswrapper[4985]: I0128 18:24:31.360541 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"c09c05a924359342e91a3cb914a3154fe8936ccd9528071be9bc8e0c570f5495"} Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.891241 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-s9875"] Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.892954 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.894641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-496gd" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.895050 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.895727 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 28 18:24:33 crc kubenswrapper[4985]: I0128 18:24:33.992560 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmwvw\" (UniqueName: \"kubernetes.io/projected/74fbf9d6-ccb4-4d90-9db8-2d4613334d81-kube-api-access-tmwvw\") pod \"obo-prometheus-operator-68bc856cb9-s9875\" (UID: \"74fbf9d6-ccb4-4d90-9db8-2d4613334d81\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.010669 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.011799 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.015001 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.015037 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-xcf75" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.025317 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.026339 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094212 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094324 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmwvw\" (UniqueName: \"kubernetes.io/projected/74fbf9d6-ccb4-4d90-9db8-2d4613334d81-kube-api-access-tmwvw\") pod \"obo-prometheus-operator-68bc856cb9-s9875\" (UID: \"74fbf9d6-ccb4-4d90-9db8-2d4613334d81\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.094492 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.120652 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-nfhqj"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.128224 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmwvw\" (UniqueName: \"kubernetes.io/projected/74fbf9d6-ccb4-4d90-9db8-2d4613334d81-kube-api-access-tmwvw\") pod \"obo-prometheus-operator-68bc856cb9-s9875\" (UID: \"74fbf9d6-ccb4-4d90-9db8-2d4613334d81\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.140041 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.142332 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.142582 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-2fmlf" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196591 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwxm6\" (UniqueName: \"kubernetes.io/projected/a23ac89d-75e4-4511-afaa-ef9d6205a672-kube-api-access-vwxm6\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196634 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a23ac89d-75e4-4511-afaa-ef9d6205a672-observability-operator-tls\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196725 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196824 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.196881 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.200700 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: 
\"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.200780 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/23ef5df5-bfbe-4465-8e87-d69896bf70aa-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb\" (UID: \"23ef5df5-bfbe-4465-8e87-d69896bf70aa\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.202843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.211986 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.213879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e192375e-5db5-46e4-922b-21b8bc5698ba-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n\" (UID: \"e192375e-5db5-46e4-922b-21b8bc5698ba\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249011 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249090 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249125 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.249180 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(29faabad969d76e1bc86a7032b8f52d0bfaa8ecd6ae885d70b138808bd732c18): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podUID="74fbf9d6-ccb4-4d90-9db8-2d4613334d81" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.299272 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwxm6\" (UniqueName: \"kubernetes.io/projected/a23ac89d-75e4-4511-afaa-ef9d6205a672-kube-api-access-vwxm6\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.299347 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a23ac89d-75e4-4511-afaa-ef9d6205a672-observability-operator-tls\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.308396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/a23ac89d-75e4-4511-afaa-ef9d6205a672-observability-operator-tls\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.314262 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-j7z4h"] Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.315106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.317547 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-625jx" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.330875 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.334074 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwxm6\" (UniqueName: \"kubernetes.io/projected/a23ac89d-75e4-4511-afaa-ef9d6205a672-kube-api-access-vwxm6\") pod \"observability-operator-59bdc8b94-nfhqj\" (UID: \"a23ac89d-75e4-4511-afaa-ef9d6205a672\") " pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.346807 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363059 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363178 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363205 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.363292 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(208e09d05b8d14b5ecd6ae1f1eff9c4a121eb4be05af6654fb7b06e8385ea0c7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.386781 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"1758414e768b7ec440bcc7b839d9210e2b1b2c9efc4ac671be293450005b4f3e"} Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390832 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390882 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390908 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.390972 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(a72ecb34afecfb553c70190416fdae983240fe461836bfd976c95203f59652a7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podUID="e192375e-5db5-46e4-922b-21b8bc5698ba" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.400888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69m2l\" (UniqueName: \"kubernetes.io/projected/971845b8-805d-4b4a-a8fd-14f263f17695-kube-api-access-69m2l\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.400990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/971845b8-805d-4b4a-a8fd-14f263f17695-openshift-service-ca\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.463186 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489412 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489500 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489535 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.489592 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9b0522f93b41c249bf97c577b9df67d08e489c2e0c55f5a5a5fdd1f981d5ab29): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.502101 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/971845b8-805d-4b4a-a8fd-14f263f17695-openshift-service-ca\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.502192 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69m2l\" (UniqueName: \"kubernetes.io/projected/971845b8-805d-4b4a-a8fd-14f263f17695-kube-api-access-69m2l\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.504127 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/971845b8-805d-4b4a-a8fd-14f263f17695-openshift-service-ca\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.526227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69m2l\" (UniqueName: \"kubernetes.io/projected/971845b8-805d-4b4a-a8fd-14f263f17695-kube-api-access-69m2l\") pod \"perses-operator-5bf474d74f-j7z4h\" (UID: \"971845b8-805d-4b4a-a8fd-14f263f17695\") " pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: I0128 18:24:34.630480 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657633 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657724 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657756 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:34 crc kubenswrapper[4985]: E0128 18:24:34.657814 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(96d9482064eef2c89d186774f6e3582ef0c84d0063bf78e7c74b2cce3005d96d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.415628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" event={"ID":"5eaf2e7f-83ab-438b-8de3-75886a97ada4","Type":"ContainerStarted","Data":"6eb47f3ff933b2a42e76298fe1e2b19e90ff72f7c98741de60d3cf30a481c54f"} Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.416059 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.416074 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.460821 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.464901 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" podStartSLOduration=8.464879861 podStartE2EDuration="8.464879861s" podCreationTimestamp="2026-01-28 18:24:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:24:36.462126543 +0000 UTC m=+687.288689374" watchObservedRunningTime="2026-01-28 18:24:36.464879861 +0000 UTC m=+687.291442682" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.577119 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.577278 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.577853 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.584716 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-s9875"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.584863 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.585402 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.591772 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.591947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.592478 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.596577 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-j7z4h"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.596728 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.597232 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.606852 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-nfhqj"] Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.606989 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: I0128 18:24:36.607672 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.640835 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.640930 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.640963 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.641023 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(93eeb652b0048b5817f30a43ddfc31c2a9f63710993025b796134cd5ebee29f0): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podUID="e192375e-5db5-46e4-922b-21b8bc5698ba" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656502 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656592 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656626 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.656695 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e0d8bd566b6792d29d17f2969f8e0d616138b6f6b3042e6e30b08de9fc377ab9): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podUID="74fbf9d6-ccb4-4d90-9db8-2d4613334d81" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662808 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662876 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662905 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.662957 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(7b6e02a522756e55ef713e4083f235de23ac9f59cdd5fb64b1d6881b2c7fb62f): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673514 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673592 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673615 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.673676 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(1bac1c1ab1c5a0cc011606031d335e11b6612bcdd7cf56720dceff8ff1c16c2b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689131 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689222 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689265 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:36 crc kubenswrapper[4985]: E0128 18:24:36.689321 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(8dd30a144e1329e37cc6303e45e1967666f6b414a0da21e86ac70951b01895f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:24:37 crc kubenswrapper[4985]: I0128 18:24:37.421498 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:37 crc kubenswrapper[4985]: I0128 18:24:37.462973 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:44 crc kubenswrapper[4985]: I0128 18:24:44.264072 4985 scope.go:117] "RemoveContainer" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535" Jan 28 18:24:44 crc kubenswrapper[4985]: E0128 18:24:44.264894 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-g2g4k_openshift-multus(14fdd73a-b8dd-42da-88b4-2ccb314c4f7a)\"" pod="openshift-multus/multus-g2g4k" podUID="14fdd73a-b8dd-42da-88b4-2ccb314c4f7a" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.265494 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.266448 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.266799 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.267058 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.267296 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: I0128 18:24:48.267545 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317061 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317156 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317187 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.317265 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-j7z4h_openshift-operators(971845b8-805d-4b4a-a8fd-14f263f17695)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-j7z4h_openshift-operators_971845b8-805d-4b4a-a8fd-14f263f17695_0(32957508a4349a8716a3b426572e91919ea6dad7bc003100cd1b52576f895b17): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326822 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326901 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326931 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.326985 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(baf9d00a47d24313a5e6e14f1fa1f183055632ef953c50227b2826fa94cd3259): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334528 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334611 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334630 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:48 crc kubenswrapper[4985]: E0128 18:24:48.334678 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(4b9061de8cbd45b11b8cf52b9c5668829e3f5e47aae2c86d96cfd48ccb2ef1e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.264004 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.264014 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.264964 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: I0128 18:24:49.265235 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309378 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309460 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309484 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.309532 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-s9875_openshift-operators(74fbf9d6-ccb4-4d90-9db8-2d4613334d81)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-s9875_openshift-operators_74fbf9d6-ccb4-4d90-9db8-2d4613334d81_0(e83055780f4bfb5c9cdadf9ace8447600e2e32d94acfeef5e58bcc7143e0d175): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podUID="74fbf9d6-ccb4-4d90-9db8-2d4613334d81" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320728 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320804 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320827 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:24:49 crc kubenswrapper[4985]: E0128 18:24:49.320892 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators(e192375e-5db5-46e4-922b-21b8bc5698ba)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_openshift-operators_e192375e-5db5-46e4-922b-21b8bc5698ba_0(fe682c1137c94f49d1de3af096b59ea625a95faef4725635a32ea2943ad3f55a): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podUID="e192375e-5db5-46e4-922b-21b8bc5698ba" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.263171 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.264195 4985 scope.go:117] "RemoveContainer" containerID="95eb50bd0d67db39cc80a75d4b4c5fb2e77de46dc2c84556d599c22d07b3f535" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.264315 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.299730 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.300118 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.300141 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:24:59 crc kubenswrapper[4985]: E0128 18:24:59.300198 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators(23ef5df5-bfbe-4465-8e87-d69896bf70aa)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_openshift-operators_23ef5df5-bfbe-4465-8e87-d69896bf70aa_0(8c4b00ae33fc1b763a4b5ef80dc9bae0cf6a6bba7db48d666e515829c5e36743): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podUID="23ef5df5-bfbe-4465-8e87-d69896bf70aa" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.333389 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.580282 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g2g4k_14fdd73a-b8dd-42da-88b4-2ccb314c4f7a/kube-multus/2.log" Jan 28 18:24:59 crc kubenswrapper[4985]: I0128 18:24:59.580331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g2g4k" event={"ID":"14fdd73a-b8dd-42da-88b4-2ccb314c4f7a","Type":"ContainerStarted","Data":"2fa855b376b5c1a8660d9a5849aee571e5d3906bf3e0683c102e56cd4407bf6a"} Jan 28 18:25:00 crc kubenswrapper[4985]: I0128 18:25:00.263640 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: I0128 18:25:00.264496 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299535 4985 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299598 4985 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299624 4985 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:00 crc kubenswrapper[4985]: E0128 18:25:00.299675 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-nfhqj_openshift-operators(a23ac89d-75e4-4511-afaa-ef9d6205a672)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-nfhqj_openshift-operators_a23ac89d-75e4-4511-afaa-ef9d6205a672_0(9701ed88a259d2af5b5a43e02af66ac3bbd05f98aa0234f03af5407a23824f45): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.263701 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.263895 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.267091 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.267239 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" Jan 28 18:25:01 crc kubenswrapper[4985]: W0128 18:25:01.693156 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74fbf9d6_ccb4_4d90_9db8_2d4613334d81.slice/crio-add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353 WatchSource:0}: Error finding container add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353: Status 404 returned error can't find the container with id add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353 Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.693630 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-s9875"] Jan 28 18:25:01 crc kubenswrapper[4985]: I0128 18:25:01.732242 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n"] Jan 28 18:25:01 crc kubenswrapper[4985]: W0128 18:25:01.737425 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode192375e_5db5_46e4_922b_21b8bc5698ba.slice/crio-30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525 WatchSource:0}: Error finding container 30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525: Status 404 returned error can't find the container with id 30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525 Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.263449 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.264052 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.517237 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-j7z4h"] Jan 28 18:25:02 crc kubenswrapper[4985]: W0128 18:25:02.525766 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod971845b8_805d_4b4a_a8fd_14f263f17695.slice/crio-07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564 WatchSource:0}: Error finding container 07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564: Status 404 returned error can't find the container with id 07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564 Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.598198 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" event={"ID":"74fbf9d6-ccb4-4d90-9db8-2d4613334d81","Type":"ContainerStarted","Data":"add82309d50d78a30022b16e5a3839e0440e8dabfb9fffdeb5835a3f9c201353"} Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.600793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" event={"ID":"e192375e-5db5-46e4-922b-21b8bc5698ba","Type":"ContainerStarted","Data":"30d4ef74c8ac24a8cb23ef26e33466ad601d4cca6b68ee6d57910df3583be525"} Jan 28 18:25:02 crc kubenswrapper[4985]: I0128 18:25:02.605492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" event={"ID":"971845b8-805d-4b4a-a8fd-14f263f17695","Type":"ContainerStarted","Data":"07407725e386e35b6df2f030849dc111c0520473845e5f97965a659a2ca7d564"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.650459 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" event={"ID":"971845b8-805d-4b4a-a8fd-14f263f17695","Type":"ContainerStarted","Data":"7c5ad487890dc7f8cf939d3bf62e5a7d4cfbe598079616ba846dec6e2e0d74d4"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.651223 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.653017 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" event={"ID":"74fbf9d6-ccb4-4d90-9db8-2d4613334d81","Type":"ContainerStarted","Data":"6970029b0a83996e485f6e97e90fa6a4a4dc35f84627861d74e3045341f5e7c8"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.656017 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" event={"ID":"e192375e-5db5-46e4-922b-21b8bc5698ba","Type":"ContainerStarted","Data":"6ab744b3faa2dcd6a5678b4286389247407f71b5138248269e9852af1dd3926d"} Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.683948 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podStartSLOduration=29.075409269 podStartE2EDuration="34.683928119s" podCreationTimestamp="2026-01-28 18:24:34 +0000 UTC" firstStartedPulling="2026-01-28 18:25:02.528659886 +0000 UTC m=+713.355222707" lastFinishedPulling="2026-01-28 18:25:08.137178736 +0000 UTC m=+718.963741557" observedRunningTime="2026-01-28 
18:25:08.679420072 +0000 UTC m=+719.505982893" watchObservedRunningTime="2026-01-28 18:25:08.683928119 +0000 UTC m=+719.510490940" Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.702792 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n" podStartSLOduration=28.304722169 podStartE2EDuration="34.702773581s" podCreationTimestamp="2026-01-28 18:24:34 +0000 UTC" firstStartedPulling="2026-01-28 18:25:01.739539235 +0000 UTC m=+712.566102056" lastFinishedPulling="2026-01-28 18:25:08.137590647 +0000 UTC m=+718.964153468" observedRunningTime="2026-01-28 18:25:08.698173961 +0000 UTC m=+719.524736802" watchObservedRunningTime="2026-01-28 18:25:08.702773581 +0000 UTC m=+719.529336402" Jan 28 18:25:08 crc kubenswrapper[4985]: I0128 18:25:08.725003 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-s9875" podStartSLOduration=29.302884076 podStartE2EDuration="35.724985327s" podCreationTimestamp="2026-01-28 18:24:33 +0000 UTC" firstStartedPulling="2026-01-28 18:25:01.696087729 +0000 UTC m=+712.522650550" lastFinishedPulling="2026-01-28 18:25:08.11818898 +0000 UTC m=+718.944751801" observedRunningTime="2026-01-28 18:25:08.720400338 +0000 UTC m=+719.546963169" watchObservedRunningTime="2026-01-28 18:25:08.724985327 +0000 UTC m=+719.551548148" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.272208 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.273863 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.634203 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" Jan 28 18:25:14 crc kubenswrapper[4985]: I0128 18:25:14.797900 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-nfhqj"] Jan 28 18:25:14 crc kubenswrapper[4985]: W0128 18:25:14.818446 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda23ac89d_75e4_4511_afaa_ef9d6205a672.slice/crio-f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689 WatchSource:0}: Error finding container f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689: Status 404 returned error can't find the container with id f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689 Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.263959 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.264890 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.531525 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb"] Jan 28 18:25:15 crc kubenswrapper[4985]: W0128 18:25:15.541855 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod23ef5df5_bfbe_4465_8e87_d69896bf70aa.slice/crio-bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63 WatchSource:0}: Error finding container bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63: Status 404 returned error can't find the container with id bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63 Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.699368 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" event={"ID":"23ef5df5-bfbe-4465-8e87-d69896bf70aa","Type":"ContainerStarted","Data":"bd0d94c9b1401faa512cfae652ff118614958312c37dcfd0ffca0410295b4b63"} Jan 28 18:25:15 crc kubenswrapper[4985]: I0128 18:25:15.700644 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerStarted","Data":"f558dc2e9ddb82cd5fc588a21b26ed4ae91ab8f2b135f922d2095a11ecd2c689"} Jan 28 18:25:16 crc kubenswrapper[4985]: I0128 18:25:16.710312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" event={"ID":"23ef5df5-bfbe-4465-8e87-d69896bf70aa","Type":"ContainerStarted","Data":"406e4cb8be88297103d4ce975fe592879d793a5f6960baaa20428a386b377277"} Jan 28 18:25:16 crc kubenswrapper[4985]: I0128 18:25:16.735344 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb" podStartSLOduration=43.735326682 podStartE2EDuration="43.735326682s" podCreationTimestamp="2026-01-28 18:24:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:25:16.730389812 +0000 UTC m=+727.556952643" watchObservedRunningTime="2026-01-28 18:25:16.735326682 +0000 UTC m=+727.561889503" Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.731546 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerStarted","Data":"22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd"} Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.731925 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.732796 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.732860 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 18:25:19 crc kubenswrapper[4985]: I0128 18:25:19.762888 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podStartSLOduration=41.040507792 podStartE2EDuration="45.762866675s" podCreationTimestamp="2026-01-28 18:24:34 +0000 UTC" firstStartedPulling="2026-01-28 18:25:14.823215593 +0000 UTC m=+725.649778414" lastFinishedPulling="2026-01-28 18:25:19.545574466 +0000 UTC m=+730.372137297" observedRunningTime="2026-01-28 18:25:19.758051889 +0000 UTC m=+730.584614710" watchObservedRunningTime="2026-01-28 18:25:19.762866675 +0000 UTC m=+730.589429496" Jan 28 18:25:20 crc kubenswrapper[4985]: I0128 18:25:20.742200 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.752539 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.753889 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.755525 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.756319 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-5vjds" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.756432 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.764187 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-dzhtm"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.765058 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.767882 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-rz7bt" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.773405 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.791480 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mwrk6"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.792349 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.794450 4985 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-h7sp5" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.805156 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dzhtm"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.812685 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mwrk6"] Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.823597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hh7sq\" (UniqueName: \"kubernetes.io/projected/26777afd-4d9f-4ebb-b8ed-0be018fa5a17-kube-api-access-hh7sq\") pod \"cert-manager-webhook-687f57d79b-mwrk6\" (UID: \"26777afd-4d9f-4ebb-b8ed-0be018fa5a17\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.823663 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5fpp\" (UniqueName: \"kubernetes.io/projected/aa962965-4b70-40f4-8400-b7ff2ec182e9-kube-api-access-w5fpp\") pod \"cert-manager-cainjector-cf98fcc89-bcvwj\" (UID: \"aa962965-4b70-40f4-8400-b7ff2ec182e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.823691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvthm\" (UniqueName: \"kubernetes.io/projected/4f9db9b6-ec43-4789-9efd-f2d4831c67e8-kube-api-access-bvthm\") pod \"cert-manager-858654f9db-dzhtm\" (UID: \"4f9db9b6-ec43-4789-9efd-f2d4831c67e8\") " pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.924938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hh7sq\" (UniqueName: \"kubernetes.io/projected/26777afd-4d9f-4ebb-b8ed-0be018fa5a17-kube-api-access-hh7sq\") pod \"cert-manager-webhook-687f57d79b-mwrk6\" (UID: \"26777afd-4d9f-4ebb-b8ed-0be018fa5a17\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.925292 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5fpp\" (UniqueName: \"kubernetes.io/projected/aa962965-4b70-40f4-8400-b7ff2ec182e9-kube-api-access-w5fpp\") pod \"cert-manager-cainjector-cf98fcc89-bcvwj\" (UID: \"aa962965-4b70-40f4-8400-b7ff2ec182e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.925324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvthm\" (UniqueName: \"kubernetes.io/projected/4f9db9b6-ec43-4789-9efd-f2d4831c67e8-kube-api-access-bvthm\") pod \"cert-manager-858654f9db-dzhtm\" (UID: \"4f9db9b6-ec43-4789-9efd-f2d4831c67e8\") " pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.943239 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvthm\" (UniqueName: \"kubernetes.io/projected/4f9db9b6-ec43-4789-9efd-f2d4831c67e8-kube-api-access-bvthm\") pod \"cert-manager-858654f9db-dzhtm\" (UID: \"4f9db9b6-ec43-4789-9efd-f2d4831c67e8\") " 
pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.948652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5fpp\" (UniqueName: \"kubernetes.io/projected/aa962965-4b70-40f4-8400-b7ff2ec182e9-kube-api-access-w5fpp\") pod \"cert-manager-cainjector-cf98fcc89-bcvwj\" (UID: \"aa962965-4b70-40f4-8400-b7ff2ec182e9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:30 crc kubenswrapper[4985]: I0128 18:25:30.952086 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hh7sq\" (UniqueName: \"kubernetes.io/projected/26777afd-4d9f-4ebb-b8ed-0be018fa5a17-kube-api-access-hh7sq\") pod \"cert-manager-webhook-687f57d79b-mwrk6\" (UID: \"26777afd-4d9f-4ebb-b8ed-0be018fa5a17\") " pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.071900 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.079948 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-dzhtm" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.107294 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.315444 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-dzhtm"] Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.593896 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj"] Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.600114 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-mwrk6"] Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.824826 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerStarted","Data":"bfc419325b88b224232769b53268124515c8a3deadb7bd3dd62760b7baa1bc3a"} Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.825959 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" event={"ID":"aa962965-4b70-40f4-8400-b7ff2ec182e9","Type":"ContainerStarted","Data":"120c9843c75cf09029347e11e4e79ad5ca84e673294a12475d6627389a1b60c1"} Jan 28 18:25:31 crc kubenswrapper[4985]: I0128 18:25:31.827084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dzhtm" event={"ID":"4f9db9b6-ec43-4789-9efd-f2d4831c67e8","Type":"ContainerStarted","Data":"6d2900cc8d8154d9389303f37c292e434e83acf2dca78c8e9012754b8db7f450"} Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.868622 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-dzhtm" event={"ID":"4f9db9b6-ec43-4789-9efd-f2d4831c67e8","Type":"ContainerStarted","Data":"db09f7747f41e7c5012f23ee3ad3a5e9ac0c27fae2a1dd084ad0d5f9ecde13be"} Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.869922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" 
event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerStarted","Data":"efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093"} Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.870094 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.886812 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-dzhtm" podStartSLOduration=2.867801993 podStartE2EDuration="7.886793623s" podCreationTimestamp="2026-01-28 18:25:30 +0000 UTC" firstStartedPulling="2026-01-28 18:25:31.326328259 +0000 UTC m=+742.152891080" lastFinishedPulling="2026-01-28 18:25:36.345319889 +0000 UTC m=+747.171882710" observedRunningTime="2026-01-28 18:25:37.884142318 +0000 UTC m=+748.710705139" watchObservedRunningTime="2026-01-28 18:25:37.886793623 +0000 UTC m=+748.713356454" Jan 28 18:25:37 crc kubenswrapper[4985]: I0128 18:25:37.905619 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podStartSLOduration=3.129742732 podStartE2EDuration="7.905594933s" podCreationTimestamp="2026-01-28 18:25:30 +0000 UTC" firstStartedPulling="2026-01-28 18:25:31.608042266 +0000 UTC m=+742.434605087" lastFinishedPulling="2026-01-28 18:25:36.383894467 +0000 UTC m=+747.210457288" observedRunningTime="2026-01-28 18:25:37.902887357 +0000 UTC m=+748.729450188" watchObservedRunningTime="2026-01-28 18:25:37.905594933 +0000 UTC m=+748.732157754" Jan 28 18:25:38 crc kubenswrapper[4985]: I0128 18:25:38.877995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" event={"ID":"aa962965-4b70-40f4-8400-b7ff2ec182e9","Type":"ContainerStarted","Data":"b87ebcf07463fd8c12859cde5e70b6fb80a7592a6f699d9b3da5c0069d2af80a"} Jan 28 18:25:38 crc kubenswrapper[4985]: I0128 18:25:38.895159 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-bcvwj" podStartSLOduration=2.372690296 podStartE2EDuration="8.895136238s" podCreationTimestamp="2026-01-28 18:25:30 +0000 UTC" firstStartedPulling="2026-01-28 18:25:31.591144429 +0000 UTC m=+742.417707240" lastFinishedPulling="2026-01-28 18:25:38.113590361 +0000 UTC m=+748.940153182" observedRunningTime="2026-01-28 18:25:38.89310505 +0000 UTC m=+749.719667881" watchObservedRunningTime="2026-01-28 18:25:38.895136238 +0000 UTC m=+749.721699059" Jan 28 18:25:46 crc kubenswrapper[4985]: I0128 18:25:46.110283 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 18:25:54 crc kubenswrapper[4985]: I0128 18:25:54.278599 4985 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 18:26:11 crc kubenswrapper[4985]: I0128 18:26:11.186792 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:26:11 crc kubenswrapper[4985]: I0128 18:26:11.187549 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.126451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.128991 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.131818 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.137951 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.226572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.226780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.226935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.328515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.328674 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.329060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.329688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.330276 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.355375 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.447355 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.526980 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.528756 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.537169 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"] Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.646946 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.647328 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.647372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.749478 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.749536 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.749577 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.750654 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " 
pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.751081 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.767578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.879528 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:15 crc kubenswrapper[4985]: I0128 18:26:15.982574 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds"] Jan 28 18:26:15 crc kubenswrapper[4985]: W0128 18:26:15.987468 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda2f76b8f_1fff_44e6_931b_d35852c1ab04.slice/crio-7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597 WatchSource:0}: Error finding container 7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597: Status 404 returned error can't find the container with id 7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597 Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.119476 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95"] Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.150830 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerStarted","Data":"d14c9322216608ff3fd9b4c5f70c9086a5972c70a87762641033ea553f1b5def"} Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.153788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerStarted","Data":"894e15ec7d9220f942b14acfcad7685a2367b1b0f812f2e821ac326391a596a4"} Jan 28 18:26:16 crc kubenswrapper[4985]: I0128 18:26:16.153836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerStarted","Data":"7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597"} Jan 28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.160645 4985 generic.go:334] "Generic (PLEG): container finished" podID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerID="c4ac76dea0f68a800666e4d35f648b0040acc4cb01a7cb6535b7cc18059fb1e3" exitCode=0 Jan 
28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.160745 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"c4ac76dea0f68a800666e4d35f648b0040acc4cb01a7cb6535b7cc18059fb1e3"} Jan 28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.174148 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerID="894e15ec7d9220f942b14acfcad7685a2367b1b0f812f2e821ac326391a596a4" exitCode=0 Jan 28 18:26:17 crc kubenswrapper[4985]: I0128 18:26:17.174368 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"894e15ec7d9220f942b14acfcad7685a2367b1b0f812f2e821ac326391a596a4"} Jan 28 18:26:18 crc kubenswrapper[4985]: I0128 18:26:18.868854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"] Jan 28 18:26:18 crc kubenswrapper[4985]: I0128 18:26:18.870971 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:18 crc kubenswrapper[4985]: I0128 18:26:18.881962 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"] Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.005530 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.005649 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.005695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.107581 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.107641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: 
I0128 18:26:19.107665 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.108396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.108609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.152242 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"redhat-operators-4dzwh\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.188705 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.190374 4985 generic.go:334] "Generic (PLEG): container finished" podID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerID="5e02319af2540360ecf8371ada1fc857a03d8e9891ff4ad09fbe5e3ee5955e14" exitCode=0 Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.190457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"5e02319af2540360ecf8371ada1fc857a03d8e9891ff4ad09fbe5e3ee5955e14"} Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.193982 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerID="a13b49cc7e5a6c2a85243136ccb7cd9085a298499675dae80e5751a420c59978" exitCode=0 Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.194055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"a13b49cc7e5a6c2a85243136ccb7cd9085a298499675dae80e5751a420c59978"} Jan 28 18:26:19 crc kubenswrapper[4985]: I0128 18:26:19.435390 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"] Jan 28 18:26:19 crc kubenswrapper[4985]: W0128 18:26:19.443651 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d87bdf0_7212_4ee9_a727_c4c4dfa0a6f9.slice/crio-edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14 WatchSource:0}: Error finding container edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14: Status 404 returned error can't find the container 
with id edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14 Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.202615 4985 generic.go:334] "Generic (PLEG): container finished" podID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerID="e8d028e6fa502a4926094f90447dd5b0dfaa5b2776af57350b61ce63ec91efa8" exitCode=0 Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.202696 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"e8d028e6fa502a4926094f90447dd5b0dfaa5b2776af57350b61ce63ec91efa8"} Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.203030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerStarted","Data":"edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14"} Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.209239 4985 generic.go:334] "Generic (PLEG): container finished" podID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerID="b0fcaf2aa9fc6cb35b7aa0ba340b5c41ae600a87a1bae320b336b665aa63865d" exitCode=0 Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.209369 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"b0fcaf2aa9fc6cb35b7aa0ba340b5c41ae600a87a1bae320b336b665aa63865d"} Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.217831 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerID="166206872b0c4d4884e6fc515dd80ff9dfc15537397aa40de4b4a7ad7d6f4489" exitCode=0 Jan 28 18:26:20 crc kubenswrapper[4985]: I0128 18:26:20.217864 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"166206872b0c4d4884e6fc515dd80ff9dfc15537397aa40de4b4a7ad7d6f4489"} Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.226437 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerStarted","Data":"c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9"} Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.601759 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.607997 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.646779 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") pod \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.646890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") pod \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.646924 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") pod \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\" (UID: \"a2f76b8f-1fff-44e6-931b-d35852c1ab04\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.648303 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle" (OuterVolumeSpecName: "bundle") pod "a2f76b8f-1fff-44e6-931b-d35852c1ab04" (UID: "a2f76b8f-1fff-44e6-931b-d35852c1ab04"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.655019 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm" (OuterVolumeSpecName: "kube-api-access-94cgm") pod "a2f76b8f-1fff-44e6-931b-d35852c1ab04" (UID: "a2f76b8f-1fff-44e6-931b-d35852c1ab04"). InnerVolumeSpecName "kube-api-access-94cgm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.748683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") pod \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749065 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") pod \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749127 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") pod \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\" (UID: \"b691bd15-43f8-4823-917b-7c27b8ca4ba6\") " Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749457 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94cgm\" (UniqueName: \"kubernetes.io/projected/a2f76b8f-1fff-44e6-931b-d35852c1ab04-kube-api-access-94cgm\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.749479 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.750002 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util" (OuterVolumeSpecName: "util") pod "b691bd15-43f8-4823-917b-7c27b8ca4ba6" (UID: "b691bd15-43f8-4823-917b-7c27b8ca4ba6"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.754660 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle" (OuterVolumeSpecName: "bundle") pod "b691bd15-43f8-4823-917b-7c27b8ca4ba6" (UID: "b691bd15-43f8-4823-917b-7c27b8ca4ba6"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.851209 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:21 crc kubenswrapper[4985]: I0128 18:26:21.851285 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b691bd15-43f8-4823-917b-7c27b8ca4ba6-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.236992 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.237013 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds" event={"ID":"a2f76b8f-1fff-44e6-931b-d35852c1ab04","Type":"ContainerDied","Data":"7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597"} Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.237511 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa0e2182016394b47444a51b40eb5073bda21f911c0c534cca66600027c5597" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.240126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" event={"ID":"b691bd15-43f8-4823-917b-7c27b8ca4ba6","Type":"ContainerDied","Data":"d14c9322216608ff3fd9b4c5f70c9086a5972c70a87762641033ea553f1b5def"} Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.240175 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.240177 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d14c9322216608ff3fd9b4c5f70c9086a5972c70a87762641033ea553f1b5def" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.660233 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5" (OuterVolumeSpecName: "kube-api-access-9w4w5") pod "b691bd15-43f8-4823-917b-7c27b8ca4ba6" (UID: "b691bd15-43f8-4823-917b-7c27b8ca4ba6"). InnerVolumeSpecName "kube-api-access-9w4w5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.661465 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9w4w5\" (UniqueName: \"kubernetes.io/projected/b691bd15-43f8-4823-917b-7c27b8ca4ba6-kube-api-access-9w4w5\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.661695 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util" (OuterVolumeSpecName: "util") pod "a2f76b8f-1fff-44e6-931b-d35852c1ab04" (UID: "a2f76b8f-1fff-44e6-931b-d35852c1ab04"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:22 crc kubenswrapper[4985]: I0128 18:26:22.763238 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a2f76b8f-1fff-44e6-931b-d35852c1ab04-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:23 crc kubenswrapper[4985]: I0128 18:26:23.248846 4985 generic.go:334] "Generic (PLEG): container finished" podID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerID="c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9" exitCode=0 Jan 28 18:26:23 crc kubenswrapper[4985]: I0128 18:26:23.248900 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9"} Jan 28 18:26:24 crc kubenswrapper[4985]: I0128 18:26:24.256156 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerStarted","Data":"5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98"} Jan 28 18:26:24 crc kubenswrapper[4985]: I0128 18:26:24.279788 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4dzwh" podStartSLOduration=2.818329796 podStartE2EDuration="6.279771901s" podCreationTimestamp="2026-01-28 18:26:18 +0000 UTC" firstStartedPulling="2026-01-28 18:26:20.205065082 +0000 UTC m=+791.031627903" lastFinishedPulling="2026-01-28 18:26:23.666507187 +0000 UTC m=+794.493070008" observedRunningTime="2026-01-28 18:26:24.277689832 +0000 UTC m=+795.104252653" watchObservedRunningTime="2026-01-28 18:26:24.279771901 +0000 UTC m=+795.106334722" Jan 28 18:26:29 crc kubenswrapper[4985]: I0128 18:26:29.189845 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:29 crc kubenswrapper[4985]: I0128 18:26:29.190199 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:30 crc kubenswrapper[4985]: I0128 18:26:30.232123 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4dzwh" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" probeResult="failure" output=< Jan 28 18:26:30 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:26:30 crc kubenswrapper[4985]: > Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969219 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"] Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.969929 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="pull" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969949 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="pull" Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.969963 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="util" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969971 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" 
containerName="util" Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.969984 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="pull" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.969992 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="pull" Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.970008 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="util" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970015 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="util" Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.970026 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="extract" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970033 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="extract" Jan 28 18:26:31 crc kubenswrapper[4985]: E0128 18:26:31.970046 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="extract" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970053 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="extract" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970214 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b691bd15-43f8-4823-917b-7c27b8ca4ba6" containerName="extract" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.970238 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2f76b8f-1fff-44e6-931b-d35852c1ab04" containerName="extract" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.971136 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.974067 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.974181 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.975075 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.975492 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.975753 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.976090 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-mn6br" Jan 28 18:26:31 crc kubenswrapper[4985]: I0128 18:26:31.995950 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"] Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097776 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097823 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-apiservice-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097847 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fc080bc5-4b4f-4405-b458-7450aaf8714b-manager-config\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.097867 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-webhook-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.098109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstl9\" 
(UniqueName: \"kubernetes.io/projected/fc080bc5-4b4f-4405-b458-7450aaf8714b-kube-api-access-zstl9\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199098 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-apiservice-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199187 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fc080bc5-4b4f-4405-b458-7450aaf8714b-manager-config\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-webhook-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.199251 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstl9\" (UniqueName: \"kubernetes.io/projected/fc080bc5-4b4f-4405-b458-7450aaf8714b-kube-api-access-zstl9\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.200081 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/fc080bc5-4b4f-4405-b458-7450aaf8714b-manager-config\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.206727 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-apiservice-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.211048 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-webhook-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.228963 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc080bc5-4b4f-4405-b458-7450aaf8714b-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.232060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstl9\" (UniqueName: \"kubernetes.io/projected/fc080bc5-4b4f-4405-b458-7450aaf8714b-kube-api-access-zstl9\") pod \"loki-operator-controller-manager-85fc96dbd6-9qljj\" (UID: \"fc080bc5-4b4f-4405-b458-7450aaf8714b\") " pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.287869 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:32 crc kubenswrapper[4985]: I0128 18:26:32.730562 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"] Jan 28 18:26:32 crc kubenswrapper[4985]: W0128 18:26:32.736946 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc080bc5_4b4f_4405_b458_7450aaf8714b.slice/crio-b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c WatchSource:0}: Error finding container b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c: Status 404 returned error can't find the container with id b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c Jan 28 18:26:33 crc kubenswrapper[4985]: I0128 18:26:33.331103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"b4d85b1e81cf2e318d4242fc41c67acc871047936680b26a3c26a77ef6d9db0c"} Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.407826 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5"] Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.408880 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.410677 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"kube-root-ca.crt" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.410891 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"openshift-service-ca.crt" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.411201 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"cluster-logging-operator-dockercfg-lmv4l" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.425338 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5"] Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.564438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6wwk\" (UniqueName: \"kubernetes.io/projected/4db97b28-803f-4b66-9322-f210440517ff-kube-api-access-j6wwk\") pod \"cluster-logging-operator-79cf69ddc8-d28w5\" (UID: \"4db97b28-803f-4b66-9322-f210440517ff\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.666510 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6wwk\" (UniqueName: \"kubernetes.io/projected/4db97b28-803f-4b66-9322-f210440517ff-kube-api-access-j6wwk\") pod \"cluster-logging-operator-79cf69ddc8-d28w5\" (UID: \"4db97b28-803f-4b66-9322-f210440517ff\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.692361 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6wwk\" (UniqueName: \"kubernetes.io/projected/4db97b28-803f-4b66-9322-f210440517ff-kube-api-access-j6wwk\") pod \"cluster-logging-operator-79cf69ddc8-d28w5\" (UID: \"4db97b28-803f-4b66-9322-f210440517ff\") " pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:36 crc kubenswrapper[4985]: I0128 18:26:36.730938 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" Jan 28 18:26:38 crc kubenswrapper[4985]: I0128 18:26:38.065612 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5"] Jan 28 18:26:38 crc kubenswrapper[4985]: W0128 18:26:38.073432 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4db97b28_803f_4b66_9322_f210440517ff.slice/crio-f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6 WatchSource:0}: Error finding container f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6: Status 404 returned error can't find the container with id f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6 Jan 28 18:26:38 crc kubenswrapper[4985]: I0128 18:26:38.364274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" event={"ID":"4db97b28-803f-4b66-9322-f210440517ff","Type":"ContainerStarted","Data":"f5e33ac8d78cd1e86fb00895970e81656e51f1b0ba4ad4d18bdcd27a430d89b6"} Jan 28 18:26:38 crc kubenswrapper[4985]: I0128 18:26:38.365903 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f"} Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.248177 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.298439 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.672228 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.673789 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.682821 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.827072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.827357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.827459 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929372 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929810 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.929911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:39 crc kubenswrapper[4985]: I0128 18:26:39.951395 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"certified-operators-92xk4\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") " pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:40 crc kubenswrapper[4985]: I0128 18:26:40.012167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4" Jan 28 18:26:40 crc kubenswrapper[4985]: I0128 18:26:40.651240 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"] Jan 28 18:26:40 crc kubenswrapper[4985]: W0128 18:26:40.673173 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod869b5731_3bfc_4db2_af7e_a065f8fbcf0f.slice/crio-488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52 WatchSource:0}: Error finding container 488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52: Status 404 returned error can't find the container with id 488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52 Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.186357 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.186807 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.414759 4985 generic.go:334] "Generic (PLEG): container finished" podID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerID="0d1f250737c643fbc85140566ed81835e3f4db2d92ec1ed36f15c0c9eb2c030a" exitCode=0 Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.414807 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"0d1f250737c643fbc85140566ed81835e3f4db2d92ec1ed36f15c0c9eb2c030a"} Jan 28 18:26:41 crc kubenswrapper[4985]: I0128 18:26:41.414838 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerStarted","Data":"488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52"} Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.256689 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"] Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.257166 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4dzwh" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" containerID="cri-o://5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" gracePeriod=2 Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.433116 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" exitCode=0 Jan 28 18:26:43 crc kubenswrapper[4985]: I0128 18:26:43.433201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98"} Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.190219 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.191163 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.191536 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 18:26:49 crc kubenswrapper[4985]: E0128 18:26:49.191571 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-4dzwh" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.678799 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.777777 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") pod \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.778191 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") pod \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.778393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") pod \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\" (UID: \"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9\") " Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.780017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities" (OuterVolumeSpecName: "utilities") pod "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" (UID: "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.796661 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44" (OuterVolumeSpecName: "kube-api-access-v4b44") pod "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" (UID: "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9"). InnerVolumeSpecName "kube-api-access-v4b44". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.880433 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.880651 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4b44\" (UniqueName: \"kubernetes.io/projected/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-kube-api-access-v4b44\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.930075 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" (UID: "6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:26:49 crc kubenswrapper[4985]: I0128 18:26:49.982588 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.480579 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"b2537536e480df8807fbf335c3a21af976e198c4fcbd7f19aee7615203234ab0"} Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.482041 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.483923 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.484684 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4dzwh" event={"ID":"6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9","Type":"ContainerDied","Data":"edef8aac6c8d1e61396f10082b442134209abcac77fca9ab8eefd215fc05cb14"} Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.484779 4985 scope.go:117] "RemoveContainer" containerID="5c3f23f40912c5b12ac449c445c4de2a5529d2912b98d21ffe77f643d4b61b98" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.484716 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4dzwh" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.486492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" event={"ID":"4db97b28-803f-4b66-9322-f210440517ff","Type":"ContainerStarted","Data":"ac84eec0161e8817b9ff325278032ec77effc79279e7d70fe1c3a60cd6c6aa23"} Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.488701 4985 generic.go:334] "Generic (PLEG): container finished" podID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerID="191c84609dfb2c8268e33648b1fa5d4251ffb2f7286e97b627cb86dee2d94615" exitCode=0 Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.488831 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"191c84609dfb2c8268e33648b1fa5d4251ffb2f7286e97b627cb86dee2d94615"} Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.500686 4985 scope.go:117] "RemoveContainer" containerID="c0101ba127274bf28c8cc50d2966b9e93977f192a37fbe59aa75129ed11ee8f9" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.516916 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podStartSLOduration=2.495678618 podStartE2EDuration="19.516891185s" podCreationTimestamp="2026-01-28 18:26:31 +0000 UTC" firstStartedPulling="2026-01-28 18:26:32.740422625 +0000 UTC m=+803.566985486" lastFinishedPulling="2026-01-28 18:26:49.761635232 +0000 UTC m=+820.588198053" observedRunningTime="2026-01-28 18:26:50.507992953 +0000 UTC m=+821.334555774" watchObservedRunningTime="2026-01-28 18:26:50.516891185 +0000 UTC 
m=+821.343454066" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.534217 4985 scope.go:117] "RemoveContainer" containerID="e8d028e6fa502a4926094f90447dd5b0dfaa5b2776af57350b61ce63ec91efa8" Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.563164 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"] Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.574731 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4dzwh"] Jan 28 18:26:50 crc kubenswrapper[4985]: I0128 18:26:50.612631 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/cluster-logging-operator-79cf69ddc8-d28w5" podStartSLOduration=2.9341536870000002 podStartE2EDuration="14.612610467s" podCreationTimestamp="2026-01-28 18:26:36 +0000 UTC" firstStartedPulling="2026-01-28 18:26:38.0760221 +0000 UTC m=+808.902584921" lastFinishedPulling="2026-01-28 18:26:49.75447888 +0000 UTC m=+820.581041701" observedRunningTime="2026-01-28 18:26:50.606743111 +0000 UTC m=+821.433305942" watchObservedRunningTime="2026-01-28 18:26:50.612610467 +0000 UTC m=+821.439173288" Jan 28 18:26:51 crc kubenswrapper[4985]: I0128 18:26:51.272687 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" path="/var/lib/kubelet/pods/6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9/volumes" Jan 28 18:26:52 crc kubenswrapper[4985]: I0128 18:26:52.506682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerStarted","Data":"d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96"} Jan 28 18:26:52 crc kubenswrapper[4985]: I0128 18:26:52.536661 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-92xk4" podStartSLOduration=3.272692851 podStartE2EDuration="13.536628356s" podCreationTimestamp="2026-01-28 18:26:39 +0000 UTC" firstStartedPulling="2026-01-28 18:26:41.419973537 +0000 UTC m=+812.246536368" lastFinishedPulling="2026-01-28 18:26:51.683909052 +0000 UTC m=+822.510471873" observedRunningTime="2026-01-28 18:26:52.532746707 +0000 UTC m=+823.359309528" watchObservedRunningTime="2026-01-28 18:26:52.536628356 +0000 UTC m=+823.363191177" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.490932 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Jan 28 18:26:55 crc kubenswrapper[4985]: E0128 18:26:55.491829 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-utilities" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.491859 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-utilities" Jan 28 18:26:55 crc kubenswrapper[4985]: E0128 18:26:55.491884 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-content" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.491892 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="extract-content" Jan 28 18:26:55 crc kubenswrapper[4985]: E0128 18:26:55.491928 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" Jan 28 18:26:55 crc 
kubenswrapper[4985]: I0128 18:26:55.491936 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.496089 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d87bdf0-7212-4ee9-a727-c4c4dfa0a6f9" containerName="registry-server" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.497441 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.503366 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.503844 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.506365 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.659473 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.659554 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk4kd\" (UniqueName: \"kubernetes.io/projected/8fa05e4c-a197-4caa-baff-285c1b90274b-kube-api-access-nk4kd\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.761163 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.761233 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk4kd\" (UniqueName: \"kubernetes.io/projected/8fa05e4c-a197-4caa-baff-285c1b90274b-kube-api-access-nk4kd\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio" Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.764161 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.764190 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0dfdd8f7ea2c81834327a58594b515cf36ff0ea5bd50ef20152bed47b4a10073/globalmount\"" pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.792458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f1781877-5af0-43d7-931c-0b572cde5552\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f1781877-5af0-43d7-931c-0b572cde5552\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.801185 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk4kd\" (UniqueName: \"kubernetes.io/projected/8fa05e4c-a197-4caa-baff-285c1b90274b-kube-api-access-nk4kd\") pod \"minio\" (UID: \"8fa05e4c-a197-4caa-baff-285c1b90274b\") " pod="minio-dev/minio"
Jan 28 18:26:55 crc kubenswrapper[4985]: I0128 18:26:55.837466 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio"
Jan 28 18:26:56 crc kubenswrapper[4985]: I0128 18:26:56.295424 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"]
Jan 28 18:26:56 crc kubenswrapper[4985]: I0128 18:26:56.536933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8fa05e4c-a197-4caa-baff-285c1b90274b","Type":"ContainerStarted","Data":"17d7018e282ed9af8dfe1fbe0dabcb857f595e3642584d4d21030b809487c064"}
Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.015463 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-92xk4"
Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.016072 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-92xk4"
Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.098012 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-92xk4"
Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.605204 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-92xk4"
Jan 28 18:27:00 crc kubenswrapper[4985]: I0128 18:27:00.648452 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"]
Jan 28 18:27:02 crc kubenswrapper[4985]: I0128 18:27:02.596794 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-92xk4" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server" containerID="cri-o://d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" gracePeriod=2
Jan 28 18:27:05 crc kubenswrapper[4985]: I0128 18:27:05.628245 4985 generic.go:334] "Generic (PLEG): container finished" podID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" exitCode=0
Jan 28 18:27:05 crc kubenswrapper[4985]: I0128 18:27:05.628274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96"}
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.012984 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.014142 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.016210 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" cmd=["grpc_health_probe","-addr=:50051"]
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.016304 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-92xk4" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server"
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.643677 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/minio/minio:latest"
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.644133 4985 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 28 18:27:10 crc kubenswrapper[4985]: container &Container{Name:minio,Image:quay.io/minio/minio:latest,Command:[/bin/bash -c mkdir -p /data/loki && \
Jan 28 18:27:10 crc kubenswrapper[4985]: minio server /data
Jan 28 18:27:10 crc kubenswrapper[4985]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:MINIO_ACCESS_KEY,Value:minio,ValueFrom:nil,},EnvVar{Name:MINIO_SECRET_KEY,Value:minio123,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:storage,ReadOnly:false,MountPath:/data,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nk4kd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod minio_minio-dev(8fa05e4c-a197-4caa-baff-285c1b90274b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Jan 28 18:27:10 crc kubenswrapper[4985]: > logger="UnhandledError"
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.645280 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minio\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="minio-dev/minio" podUID="8fa05e4c-a197-4caa-baff-285c1b90274b"
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.678810 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-92xk4" event={"ID":"869b5731-3bfc-4db2-af7e-a065f8fbcf0f","Type":"ContainerDied","Data":"488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52"}
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.678860 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="488334f3f6fbb34a19e19115e46d9ed76de4efd03f74ca396d15a7e5d31b3c52"
Jan 28 18:27:10 crc kubenswrapper[4985]: E0128 18:27:10.680077 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"minio\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/minio/minio:latest\\\"\"" pod="minio-dev/minio" podUID="8fa05e4c-a197-4caa-baff-285c1b90274b"
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.681325 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4"
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.808845 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") pod \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") "
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.808962 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") pod \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") "
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.809044 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") pod \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\" (UID: \"869b5731-3bfc-4db2-af7e-a065f8fbcf0f\") "
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.810235 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities" (OuterVolumeSpecName: "utilities") pod "869b5731-3bfc-4db2-af7e-a065f8fbcf0f" (UID: "869b5731-3bfc-4db2-af7e-a065f8fbcf0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.816630 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn" (OuterVolumeSpecName: "kube-api-access-glgsn") pod "869b5731-3bfc-4db2-af7e-a065f8fbcf0f" (UID: "869b5731-3bfc-4db2-af7e-a065f8fbcf0f"). InnerVolumeSpecName "kube-api-access-glgsn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.880987 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "869b5731-3bfc-4db2-af7e-a065f8fbcf0f" (UID: "869b5731-3bfc-4db2-af7e-a065f8fbcf0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.911281 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-glgsn\" (UniqueName: \"kubernetes.io/projected/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-kube-api-access-glgsn\") on node \"crc\" DevicePath \"\""
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.911317 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 18:27:10 crc kubenswrapper[4985]: I0128 18:27:10.911333 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/869b5731-3bfc-4db2-af7e-a065f8fbcf0f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.185995 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.186100 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.186163 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.186957 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.187040 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2" gracePeriod=600
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.694357 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2" exitCode=0
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.694966 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-92xk4"
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.695845 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2"}
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.695890 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab"}
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.696090 4985 scope.go:117] "RemoveContainer" containerID="7f63b5a5d82d462357c3a92eda8a9e8dafecb82cb35862cc75804b4a50b4c56e"
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.724196 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"]
Jan 28 18:27:11 crc kubenswrapper[4985]: I0128 18:27:11.731559 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-92xk4"]
Jan 28 18:27:13 crc kubenswrapper[4985]: I0128 18:27:13.277121 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" path="/var/lib/kubelet/pods/869b5731-3bfc-4db2-af7e-a065f8fbcf0f/volumes"
Jan 28 18:27:26 crc kubenswrapper[4985]: I0128 18:27:26.821470 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"8fa05e4c-a197-4caa-baff-285c1b90274b","Type":"ContainerStarted","Data":"247208a62a9fd9696af76842086b6539ee86ffefaec40a46abe8dc43f1f10530"}
Jan 28 18:27:26 crc kubenswrapper[4985]: I0128 18:27:26.849949 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.46905453 podStartE2EDuration="33.849922768s" podCreationTimestamp="2026-01-28 18:26:53 +0000 UTC" firstStartedPulling="2026-01-28 18:26:56.306765246 +0000 UTC m=+827.133328067" lastFinishedPulling="2026-01-28 18:27:25.687633444 +0000 UTC m=+856.514196305" observedRunningTime="2026-01-28 18:27:26.840568004 +0000 UTC m=+857.667130865" watchObservedRunningTime="2026-01-28 18:27:26.849922768 +0000 UTC m=+857.676485629"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.327627 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"]
Jan 28 18:27:33 crc kubenswrapper[4985]: E0128 18:27:33.328466 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-utilities"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328480 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-utilities"
Jan 28 18:27:33 crc kubenswrapper[4985]: E0128 18:27:33.328500 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-content"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328507 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="extract-content"
Jan 28 18:27:33 crc kubenswrapper[4985]: E0128 18:27:33.328518 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328524 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.328630 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="869b5731-3bfc-4db2-af7e-a065f8fbcf0f" containerName="registry-server"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.329080 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.333505 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-dockercfg-stzxf"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.333822 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-ca-bundle"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.333955 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-config"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.334063 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-http"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.334209 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-distributor-grpc"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.340069 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.470184 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-dkn9m"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.471632 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.478504 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-grpc"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.478886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-querier-http"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.479053 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-s3"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.493281 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-dkn9m"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.532188 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-config\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.532682 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.532830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.533015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djgj\" (UniqueName: \"kubernetes.io/projected/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-kube-api-access-5djgj\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.533228 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.556903 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.557712 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.559225 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-http"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.562308 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-query-frontend-grpc"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.569685 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634389 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djgj\" (UniqueName: \"kubernetes.io/projected/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-kube-api-access-5djgj\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634752 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634811 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634833 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-config\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634862 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqsww\" (UniqueName: \"kubernetes.io/projected/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-kube-api-access-jqsww\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634882 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634922 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.634946 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-config\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.635914 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-ca-bundle\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.636390 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-config\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.654351 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-http\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-http\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.654366 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-logging-loki-distributor-grpc\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.665994 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-g5tqr"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.676675 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.686524 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.686894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-dockercfg-2pqzh"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687092 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway-ca-bundle"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687391 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-client-http"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687663 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"logging-loki-gateway"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.687857 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-gateway-http"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.703486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djgj\" (UniqueName: \"kubernetes.io/projected/effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb-kube-api-access-5djgj\") pod \"logging-loki-distributor-5f678c8dd6-2755m\" (UID: \"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb\") " pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.730357 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-g5tqr"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736128 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736175 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-config\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736354 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jqsww\" (UniqueName: \"kubernetes.io/projected/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-kube-api-access-jqsww\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736423 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8gzq\" (UniqueName: \"kubernetes.io/projected/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-kube-api-access-f8gzq\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736442 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-config\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736488 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.736506 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.737978 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-ca-bundle\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.744684 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-c6d96"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.744863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-grpc\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-grpc\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.746078 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.750017 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-c6d96"]
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.751727 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-config\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.756121 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-s3\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.761875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-querier-http\" (UniqueName: \"kubernetes.io/secret/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-logging-loki-querier-http\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.766089 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqsww\" (UniqueName: \"kubernetes.io/projected/21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7-kube-api-access-jqsww\") pod \"logging-loki-querier-76788598db-dkn9m\" (UID: \"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7\") " pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.793289 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838048 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838121 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8gzq\" (UniqueName: \"kubernetes.io/projected/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-kube-api-access-f8gzq\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838155 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5zhn\" (UniqueName: \"kubernetes.io/projected/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-kube-api-access-n5zhn\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838185 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-rbac\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838364 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838453 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838487 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838509 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tls-secret\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838586 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqps9\" (UniqueName: \"kubernetes.io/projected/ae6864ac-d6e2-4d85-aa84-361f51b944eb-kube-api-access-mqps9\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838668 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tenants\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838696 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tls-secret\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838716 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-config\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tenants\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838786 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-rbac\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838807 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838824 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.838845 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.839810 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-ca-bundle\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.841881 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-config\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.842092 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-http\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.842616 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-logging-loki-query-frontend-grpc\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.860464 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8gzq\" (UniqueName: \"kubernetes.io/projected/5c56d4fe-62c7-47ef-9a0f-607d899d19b8-kube-api-access-f8gzq\") pod \"logging-loki-query-frontend-69d9546745-pcd6x\" (UID: \"5c56d4fe-62c7-47ef-9a0f-607d899d19b8\") " pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.877443 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5zhn\" (UniqueName: \"kubernetes.io/projected/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-kube-api-access-n5zhn\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940657 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-rbac\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940710 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940753 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940781 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tls-secret\") pod
\"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940825 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940848 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqps9\" (UniqueName: \"kubernetes.io/projected/ae6864ac-d6e2-4d85-aa84-361f51b944eb-kube-api-access-mqps9\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940886 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tenants\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940918 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tls-secret\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940942 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940965 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tenants\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.940992 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-rbac\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.941012 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 
18:27:33.941031 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.941055 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.942380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.944782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.944903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-rbac\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.946474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tenants\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.946662 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-tls-secret\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.947523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.947613 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ae6864ac-d6e2-4d85-aa84-361f51b944eb-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.947982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-rbac\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.958360 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.958686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-ca-bundle\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.958793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-lokistack-gateway\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.961506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-logging-loki-gateway-client-http\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.961724 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqps9\" (UniqueName: \"kubernetes.io/projected/ae6864ac-d6e2-4d85-aa84-361f51b944eb-kube-api-access-mqps9\") pod \"logging-loki-gateway-76696895d9-g5tqr\" (UID: \"ae6864ac-d6e2-4d85-aa84-361f51b944eb\") " pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.961883 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.963854 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tls-secret\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.966454 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5zhn\" (UniqueName: \"kubernetes.io/projected/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-kube-api-access-n5zhn\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:33 crc kubenswrapper[4985]: I0128 18:27:33.966459 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b-tenants\") pod \"logging-loki-gateway-76696895d9-c6d96\" (UID: \"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b\") " pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.035492 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.105714 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.230942 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-querier-76788598db-dkn9m"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.359785 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"] Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.363598 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5c56d4fe_62c7_47ef_9a0f_607d899d19b8.slice/crio-7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e WatchSource:0}: Error finding container 7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e: Status 404 returned error can't find the container with id 7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.467376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"] Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.468347 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeffc2fb2_2eb7_4ea0_abf1_0d43bde4adeb.slice/crio-4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d WatchSource:0}: Error finding container 4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d: Status 404 returned error can't find the container with id 4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.488196 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 
18:27:34.489300 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.492866 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-grpc" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.493796 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-ingester-http" Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.499530 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae6864ac_d6e2_4d85_aa84_361f51b944eb.slice/crio-98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a WatchSource:0}: Error finding container 98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a: Status 404 returned error can't find the container with id 98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.501536 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-g5tqr"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.511082 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.537072 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.537922 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.539719 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-http" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.542077 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-compactor-grpc" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.549439 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.579585 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-gateway-76696895d9-c6d96"] Jan 28 18:27:34 crc kubenswrapper[4985]: W0128 18:27:34.582852 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod02e0988e_bb4d_4c63_a4aa_3f1432a1ee7b.slice/crio-ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516 WatchSource:0}: Error finding container ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516: Status 404 returned error can't find the container with id ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516 Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.615672 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.616935 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.619410 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-http" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.619609 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"logging-loki-index-gateway-grpc" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.621968 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"] Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.657841 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.657882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-config\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.657912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658031 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658070 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtnn5\" (UniqueName: \"kubernetes.io/projected/e322915e-933c-4de4-98dd-ef047ee5b056-kube-api-access-wtnn5\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " 
pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658218 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658239 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-config\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658279 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsxk6\" (UniqueName: \"kubernetes.io/projected/ac72f54d-936d-4c98-9f91-918f7a05b5d1-kube-api-access-bsxk6\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658305 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658337 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658461 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.658477 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759865 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759930 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.759964 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760028 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2dwn\" (UniqueName: \"kubernetes.io/projected/664a7afe-25ae-45f8-81bd-9a9c59c431cd-kube-api-access-w2dwn\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760057 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760084 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760230 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760501 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760546 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-config\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760605 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wtnn5\" (UniqueName: \"kubernetes.io/projected/e322915e-933c-4de4-98dd-ef047ee5b056-kube-api-access-wtnn5\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760733 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760758 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-config\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760780 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bsxk6\" (UniqueName: \"kubernetes.io/projected/ac72f54d-936d-4c98-9f91-918f7a05b5d1-kube-api-access-bsxk6\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760806 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.760833 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.761715 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-config\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.762195 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-ca-bundle\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.762797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ca-bundle\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.764575 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac72f54d-936d-4c98-9f91-918f7a05b5d1-config\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.766212 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-s3\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.766231 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-s3\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-grpc\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767378 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767416 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9d34aebcddf21e72b6271ca9fd89e77f2902f6b93aa7b3d4cec0d014dfe6e8f6/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767520 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767583 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/0a50106bc928bdbed945f7ef72ab597a68c4c7a6f33ecb55fb4d0f537b7d613d/globalmount\"" pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.767736 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-http\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-http\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.768018 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.768054 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f86c0bdb72dc4e631fa3430a68d817f45a059b0d41cd015f7b9c23b2d7dc03d4/globalmount\"" pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.770026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ingester-http\" (UniqueName: \"kubernetes.io/secret/e322915e-933c-4de4-98dd-ef047ee5b056-logging-loki-ingester-http\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.771831 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/ac72f54d-936d-4c98-9f91-918f7a05b5d1-logging-loki-compactor-grpc\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.778368 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bsxk6\" (UniqueName: \"kubernetes.io/projected/ac72f54d-936d-4c98-9f91-918f7a05b5d1-kube-api-access-bsxk6\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.784120 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtnn5\" (UniqueName: \"kubernetes.io/projected/e322915e-933c-4de4-98dd-ef047ee5b056-kube-api-access-wtnn5\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.794646 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-fca63622-5aca-4efb-a7fe-bb443a1c1f59\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.795234 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7d3bb0be-7a81-454c-ac38-c6ad37f0ea95\") pod \"logging-loki-compactor-0\" (UID: \"ac72f54d-936d-4c98-9f91-918f7a05b5d1\") " pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.797876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e5484364-652f-4506-b78b-405e87866424\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e5484364-652f-4506-b78b-405e87866424\") pod \"logging-loki-ingester-0\" (UID: \"e322915e-933c-4de4-98dd-ef047ee5b056\") " pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.816782 4985 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-ingester-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.858939 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-compactor-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862684 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2dwn\" (UniqueName: \"kubernetes.io/projected/664a7afe-25ae-45f8-81bd-9a9c59c431cd-kube-api-access-w2dwn\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.862738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.863152 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.863588 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.863639 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.864591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-config\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0" 
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.865011 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-ca-bundle\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-http\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866767 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866828 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/da5ff38c7787397afb3cc363a26e7e8fa9ae822407f71e523b9148e301f40a94/globalmount\"" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.866861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-index-gateway-grpc\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.867052 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-loki-s3\" (UniqueName: \"kubernetes.io/secret/664a7afe-25ae-45f8-81bd-9a9c59c431cd-logging-loki-s3\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.880705 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2dwn\" (UniqueName: \"kubernetes.io/projected/664a7afe-25ae-45f8-81bd-9a9c59c431cd-kube-api-access-w2dwn\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.882712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" event={"ID":"5c56d4fe-62c7-47ef-9a0f-607d899d19b8","Type":"ContainerStarted","Data":"7451ce76c0eeac02a853d076996cdb46adc418e5efa56e5641b4213b58bbfa0e"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.885188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" event={"ID":"ae6864ac-d6e2-4d85-aa84-361f51b944eb","Type":"ContainerStarted","Data":"98c517ac65262433c9fc503fa4e0561a169da5ada6742db210ebff64d028673a"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.886221 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" event={"ID":"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b","Type":"ContainerStarted","Data":"ce25d9d30a9830dda3a3182457002b75267272cafe29fa8789893581aa5cb516"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.901114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" event={"ID":"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb","Type":"ContainerStarted","Data":"4b46b6dbff4cc34d5bbe20467db5806c99ab61783cfb50c150fea3d55b94fd7d"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.918656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" event={"ID":"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7","Type":"ContainerStarted","Data":"2b9d1e6ddcc3d486b25b59b9b3b27d1121412cfc510cc740b881f81c041aed0d"}
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.923857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ed1092d2-65bc-47b0-81f9-72627d9feec9\") pod \"logging-loki-index-gateway-0\" (UID: \"664a7afe-25ae-45f8-81bd-9a9c59c431cd\") " pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:34 crc kubenswrapper[4985]: I0128 18:27:34.931366 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.222081 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-ingester-0"]
Jan 28 18:27:35 crc kubenswrapper[4985]: W0128 18:27:35.226279 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode322915e_933c_4de4_98dd_ef047ee5b056.slice/crio-b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f WatchSource:0}: Error finding container b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f: Status 404 returned error can't find the container with id b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.293098 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-compactor-0"]
Jan 28 18:27:35 crc kubenswrapper[4985]: W0128 18:27:35.294619 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podac72f54d_936d_4c98_9f91_918f7a05b5d1.slice/crio-05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f WatchSource:0}: Error finding container 05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f: Status 404 returned error can't find the container with id 05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.347693 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/logging-loki-index-gateway-0"]
Jan 28 18:27:35 crc kubenswrapper[4985]: W0128 18:27:35.353785 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod664a7afe_25ae_45f8_81bd_9a9c59c431cd.slice/crio-882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135 WatchSource:0}: Error finding container 882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135: Status 404 returned error can't find the container with id 882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.927999 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"664a7afe-25ae-45f8-81bd-9a9c59c431cd","Type":"ContainerStarted","Data":"882c8ccf382662e9661161384c9f7d44ee73628918020cd4930a4c8f50388135"}
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.929992 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e322915e-933c-4de4-98dd-ef047ee5b056","Type":"ContainerStarted","Data":"b3aaf44848f1f5e8a2acf0691eac5c29fa36f7435b88db2f351a8b3d8a61251f"}
Jan 28 18:27:35 crc kubenswrapper[4985]: I0128 18:27:35.930910 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ac72f54d-936d-4c98-9f91-918f7a05b5d1","Type":"ContainerStarted","Data":"05f29297abaf23f20c9b6aa2c33cf8d8235321abd64bd3311ec1f63133a5e51f"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.954946 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" event={"ID":"5c56d4fe-62c7-47ef-9a0f-607d899d19b8","Type":"ContainerStarted","Data":"e5d10ad440fd48d587173ef40bb25ee2c50f17e8dfd6388913a8ace6022d8276"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.955648 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.957749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-index-gateway-0" event={"ID":"664a7afe-25ae-45f8-81bd-9a9c59c431cd","Type":"ContainerStarted","Data":"979f7178decf96b036aeaeefc740956920aa4c3e7dea476507625e079d4bf654"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.957892 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.960346 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-ingester-0" event={"ID":"e322915e-933c-4de4-98dd-ef047ee5b056","Type":"ContainerStarted","Data":"7b85fb0b4324d5d5159bd3e31814a9b315085473da50651a26099491a3cad1c7"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.960470 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.961853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" event={"ID":"ae6864ac-d6e2-4d85-aa84-361f51b944eb","Type":"ContainerStarted","Data":"bb98b3a9ae24440a684bdc98d1f296c6416de56f94cf56c8e4ba101fe4b010ce"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.963515 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" event={"ID":"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b","Type":"ContainerStarted","Data":"dfb996e7fc5b44eebaffe384562e7c0762443e351a1b60cec569371d59fdefe2"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.965854 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-compactor-0" event={"ID":"ac72f54d-936d-4c98-9f91-918f7a05b5d1","Type":"ContainerStarted","Data":"dfb3a36bbffe1a384711bb7726bff8c8c9f17845fb448441da4e2ac14e7a1ae9"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.965928 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.967873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" event={"ID":"21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7","Type":"ContainerStarted","Data":"1bc36136fdf9a9f030bacd5411ac681502b0ed109dc47735176020a3150e8b66"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.968088 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.969116 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" event={"ID":"effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb","Type":"ContainerStarted","Data":"7ebdb1482b87e174d14ffaf25af81b75da2729b12bdcc6e6952a1b79ff2f49d4"}
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.969353 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.973126 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podStartSLOduration=2.23186055 podStartE2EDuration="5.973110733s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.365685306 +0000 UTC m=+865.192248137" lastFinishedPulling="2026-01-28 18:27:38.106935499 +0000 UTC m=+868.933498320" observedRunningTime="2026-01-28 18:27:38.971852578 +0000 UTC m=+869.798415419" watchObservedRunningTime="2026-01-28 18:27:38.973110733 +0000 UTC m=+869.799673544"
Jan 28 18:27:38 crc kubenswrapper[4985]: I0128 18:27:38.995267 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podStartSLOduration=2.273232917 podStartE2EDuration="5.995241218s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.470881395 +0000 UTC m=+865.297444216" lastFinishedPulling="2026-01-28 18:27:38.192889656 +0000 UTC m=+869.019452517" observedRunningTime="2026-01-28 18:27:38.993999463 +0000 UTC m=+869.820562274" watchObservedRunningTime="2026-01-28 18:27:38.995241218 +0000 UTC m=+869.821804039"
Jan 28 18:27:39 crc kubenswrapper[4985]: I0128 18:27:39.036539 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-ingester-0" podStartSLOduration=3.078744418 podStartE2EDuration="6.036522793s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:35.228188525 +0000 UTC m=+866.054751336" lastFinishedPulling="2026-01-28 18:27:38.18596689 +0000 UTC m=+869.012529711" observedRunningTime="2026-01-28 18:27:39.019843412 +0000 UTC m=+869.846406253" watchObservedRunningTime="2026-01-28 18:27:39.036522793 +0000 UTC m=+869.863085604"
Jan 28 18:27:39 crc kubenswrapper[4985]: I0128 18:27:39.037335 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podStartSLOduration=2.072555832 podStartE2EDuration="6.037329606s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.243865986 +0000 UTC m=+865.070428807" lastFinishedPulling="2026-01-28 18:27:38.20863976 +0000 UTC m=+869.035202581" observedRunningTime="2026-01-28 18:27:39.031848191 +0000 UTC m=+869.858411012" watchObservedRunningTime="2026-01-28 18:27:39.037329606 +0000 UTC m=+869.863892427"
Jan 28 18:27:39 crc kubenswrapper[4985]: I0128 18:27:39.055446 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-compactor-0" podStartSLOduration=3.160743493 podStartE2EDuration="6.055428077s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:35.295908907 +0000 UTC m=+866.122471728" lastFinishedPulling="2026-01-28 18:27:38.190593491 +0000 UTC m=+869.017156312" observedRunningTime="2026-01-28 18:27:39.049430998 +0000 UTC m=+869.875993829" watchObservedRunningTime="2026-01-28 18:27:39.055428077 +0000 UTC m=+869.881990898"
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.292096 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-index-gateway-0" podStartSLOduration=5.467065757 podStartE2EDuration="8.292050193s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:35.359521583 +0000 UTC m=+866.186084404" lastFinishedPulling="2026-01-28 18:27:38.184506019 +0000 UTC m=+869.011068840" observedRunningTime="2026-01-28 18:27:39.069390111 +0000 UTC m=+869.895952952" watchObservedRunningTime="2026-01-28 18:27:41.292050193 +0000 UTC m=+872.118613014"
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.997235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" event={"ID":"ae6864ac-d6e2-4d85-aa84-361f51b944eb","Type":"ContainerStarted","Data":"49b1b47d70ef49d5d3c357e0e4c0260742a1e71fbda027d7a0c7b08b2240878f"}
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.997607 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:41 crc kubenswrapper[4985]: I0128 18:27:41.997639 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.000451 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" event={"ID":"02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b","Type":"ContainerStarted","Data":"bd57bd2da85666a901250eb2b260ff39ea755f7279264c3a5fa429402f673f0e"}
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.000720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.000825 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.014692 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.015667 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.015863 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.016925 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.027145 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podStartSLOduration=2.170748443 podStartE2EDuration="9.027122645s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.505215524 +0000 UTC m=+865.331778345" lastFinishedPulling="2026-01-28 18:27:41.361589716 +0000 UTC m=+872.188152547" observedRunningTime="2026-01-28 18:27:42.026504587 +0000 UTC m=+872.853067438" watchObservedRunningTime="2026-01-28 18:27:42.027122645 +0000 UTC m=+872.853685466"
Jan 28 18:27:42 crc kubenswrapper[4985]: I0128 18:27:42.079791 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podStartSLOduration=2.283214958 podStartE2EDuration="9.079763911s" podCreationTimestamp="2026-01-28 18:27:33 +0000 UTC" firstStartedPulling="2026-01-28 18:27:34.585353837 +0000 UTC m=+865.411916658" lastFinishedPulling="2026-01-28 18:27:41.38190278 +0000 UTC m=+872.208465611" observedRunningTime="2026-01-28 18:27:42.066326252 +0000 UTC m=+872.892889083" watchObservedRunningTime="2026-01-28 18:27:42.079763911 +0000 UTC m=+872.906326742"
Jan 28 18:27:53 crc kubenswrapper[4985]: I0128 18:27:53.803446 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 18:27:53 crc kubenswrapper[4985]: I0128 18:27:53.884082 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 18:27:53 crc kubenswrapper[4985]: I0128 18:27:53.971842 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.822665 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.822944 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.865293 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-compactor-0"
Jan 28 18:27:54 crc kubenswrapper[4985]: I0128 18:27:54.938076 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-index-gateway-0"
Jan 28 18:28:04 crc kubenswrapper[4985]: I0128 18:28:04.827651 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: this instance owns no tokens
Jan 28 18:28:04 crc kubenswrapper[4985]: I0128 18:28:04.828390 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:28:14 crc kubenswrapper[4985]: I0128 18:28:14.825728 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 28 18:28:14 crc kubenswrapper[4985]: I0128 18:28:14.826322 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:28:24 crc kubenswrapper[4985]: I0128 18:28:24.823475 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=Ingester not ready: waiting for 15s after being ready
Jan 28 18:28:24 crc kubenswrapper[4985]: I0128 18:28:24.823990 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 18:28:34 crc kubenswrapper[4985]: I0128 18:28:34.823987 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-ingester-0"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.589954 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.591777 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.596041 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.596901 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.597322 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.597878 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-wm86f"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.598156 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.612745 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.671641 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.678077 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.678821 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[collector-syslog-receiver collector-token config config-openshift-service-cacrt datadir entrypoint kube-api-access-nk5b9 metrics sa-token tmp trusted-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-logging/collector-pg6pj" podUID="3783738c-5aae-44e2-8406-47ac21968731"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759523 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759619 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759702 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759736 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.759936 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862140 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862184 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862202 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862221 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862267 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862289 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862291 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.862312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862386 4985 secret.go:188] Couldn't get secret openshift-logging/collector-metrics: secret "collector-metrics" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862424 4985 secret.go:188] Couldn't get secret openshift-logging/collector-syslog-receiver: secret "collector-syslog-receiver" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862440 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics podName:3783738c-5aae-44e2-8406-47ac21968731 nodeName:}" failed. No retries permitted until 2026-01-28 18:28:52.362421271 +0000 UTC m=+943.188984092 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics" (UniqueName: "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics") pod "collector-pg6pj" (UID: "3783738c-5aae-44e2-8406-47ac21968731") : secret "collector-metrics" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: E0128 18:28:51.862470 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver podName:3783738c-5aae-44e2-8406-47ac21968731 nodeName:}" failed. No retries permitted until 2026-01-28 18:28:52.362459122 +0000 UTC m=+943.189021943 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "collector-syslog-receiver" (UniqueName: "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver") pod "collector-pg6pj" (UID: "3783738c-5aae-44e2-8406-47ac21968731") : secret "collector-syslog-receiver" not found
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.863231 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.863809 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.863998 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.864955 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.870045 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.879761 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.881498 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:51 crc kubenswrapper[4985]: I0128 18:28:51.888987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.370612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.370970 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.382326 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.382411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"collector-pg6pj\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") " pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.588148 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.598136 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675699 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675762 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675827 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675863 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675908 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.675937 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676082 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676167 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676220 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") pod \"3783738c-5aae-44e2-8406-47ac21968731\" (UID: \"3783738c-5aae-44e2-8406-47ac21968731\") "
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676443 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir" (OuterVolumeSpecName: "datadir") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "datadir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676874 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.676898 4985 reconciler_common.go:293] "Volume detached for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/3783738c-5aae-44e2-8406-47ac21968731-datadir\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.677085 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config" (OuterVolumeSpecName: "config") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.677393 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt" (OuterVolumeSpecName: "config-openshift-service-cacrt") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "config-openshift-service-cacrt". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.677578 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint" (OuterVolumeSpecName: "entrypoint") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "entrypoint". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.679998 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token" (OuterVolumeSpecName: "sa-token") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.680196 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp" (OuterVolumeSpecName: "tmp") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.680409 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token" (OuterVolumeSpecName: "collector-token") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "collector-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.681874 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics" (OuterVolumeSpecName: "metrics") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.682475 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver" (OuterVolumeSpecName: "collector-syslog-receiver") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "collector-syslog-receiver". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.683105 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9" (OuterVolumeSpecName: "kube-api-access-nk5b9") pod "3783738c-5aae-44e2-8406-47ac21968731" (UID: "3783738c-5aae-44e2-8406-47ac21968731"). InnerVolumeSpecName "kube-api-access-nk5b9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.777969 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778010 4985 reconciler_common.go:293] "Volume detached for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-token\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778024 4985 reconciler_common.go:293] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/3783738c-5aae-44e2-8406-47ac21968731-tmp\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778038 4985 reconciler_common.go:293] "Volume detached for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-config-openshift-service-cacrt\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778051 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-trusted-ca\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778064 4985 reconciler_common.go:293] "Volume detached for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/3783738c-5aae-44e2-8406-47ac21968731-entrypoint\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778078 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nk5b9\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-kube-api-access-nk5b9\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778090 4985 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-metrics\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778100 4985 reconciler_common.go:293] "Volume detached for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/3783738c-5aae-44e2-8406-47ac21968731-sa-token\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:52 crc kubenswrapper[4985]: I0128 18:28:52.778112 4985 reconciler_common.go:293] "Volume detached for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/3783738c-5aae-44e2-8406-47ac21968731-collector-syslog-receiver\") on node \"crc\" DevicePath \"\""
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.599566 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-pg6pj"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.657310 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.664868 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-logging/collector-pg6pj"]
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.675823 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-logging/collector-gthjs"]
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.676805 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.685856 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-metrics"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.686595 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-dockercfg-wm86f"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.686882 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-config"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.687467 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-syslog-receiver"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.687705 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-logging"/"collector-token"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.691061 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-gthjs"]
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.693835 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-logging"/"collector-trustbundle"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.796563 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be7250ed-2e5a-403a-abfa-f1855e86ae44-tmp\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.796630 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9j2\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-kube-api-access-8t9j2\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.796687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-sa-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797179 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797527 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797613 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-trusted-ca\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797873 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-entrypoint\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.797921 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be7250ed-2e5a-403a-abfa-f1855e86ae44-datadir\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.798000 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config-openshift-service-cacrt\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.798060 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-syslog-receiver\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.798160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-metrics\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899872 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-trusted-ca\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899926 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-entrypoint\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899952 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be7250ed-2e5a-403a-abfa-f1855e86ae44-datadir\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.899978 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config-openshift-service-cacrt\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-syslog-receiver\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900039 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-metrics\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be7250ed-2e5a-403a-abfa-f1855e86ae44-tmp\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9j2\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-kube-api-access-8t9j2\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-sa-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.900263 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"datadir\" (UniqueName: \"kubernetes.io/host-path/be7250ed-2e5a-403a-abfa-f1855e86ae44-datadir\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.901156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-openshift-service-cacrt\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config-openshift-service-cacrt\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.901512 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"entrypoint\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-entrypoint\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.901609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-config\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.902172 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/be7250ed-2e5a-403a-abfa-f1855e86ae44-trusted-ca\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.903636 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-syslog-receiver\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-syslog-receiver\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.915823 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/be7250ed-2e5a-403a-abfa-f1855e86ae44-tmp\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.915977 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collector-token\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-collector-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.916175 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/secret/be7250ed-2e5a-403a-abfa-f1855e86ae44-metrics\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.924419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sa-token\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-sa-token\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:53 crc kubenswrapper[4985]: I0128 18:28:53.924811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9j2\" (UniqueName: \"kubernetes.io/projected/be7250ed-2e5a-403a-abfa-f1855e86ae44-kube-api-access-8t9j2\") pod \"collector-gthjs\" (UID: \"be7250ed-2e5a-403a-abfa-f1855e86ae44\") " pod="openshift-logging/collector-gthjs"
Jan 28 18:28:54 crc kubenswrapper[4985]: I0128 18:28:54.017339 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-logging/collector-gthjs"
Jan 28 18:28:54 crc kubenswrapper[4985]: I0128 18:28:54.477031 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-logging/collector-gthjs"]
Jan 28 18:28:54 crc kubenswrapper[4985]: I0128 18:28:54.610191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-gthjs" event={"ID":"be7250ed-2e5a-403a-abfa-f1855e86ae44","Type":"ContainerStarted","Data":"00ae2f783614c06b7da308c2ab3a5a997cb9e8208f790c3fc0dbe87b680aba72"}
Jan 28 18:28:55 crc kubenswrapper[4985]: I0128 18:28:55.281326 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3783738c-5aae-44e2-8406-47ac21968731" path="/var/lib/kubelet/pods/3783738c-5aae-44e2-8406-47ac21968731/volumes"
Jan 28 18:29:04 crc kubenswrapper[4985]: I0128 18:29:04.693234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-logging/collector-gthjs" event={"ID":"be7250ed-2e5a-403a-abfa-f1855e86ae44","Type":"ContainerStarted","Data":"5bacc122dfbc0f1572079c451f306713df7e0fed758858331828ed8721584186"}
Jan 28 18:29:04 crc kubenswrapper[4985]: I0128 18:29:04.720631 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-logging/collector-gthjs" podStartSLOduration=2.114902166 podStartE2EDuration="11.720608325s" podCreationTimestamp="2026-01-28 18:28:53 +0000 UTC" firstStartedPulling="2026-01-28 18:28:54.48704342 +0000 UTC m=+945.313606281" lastFinishedPulling="2026-01-28 18:29:04.092749629 +0000 UTC m=+954.919312440" observedRunningTime="2026-01-28 18:29:04.716742606 +0000 UTC m=+955.543305427" watchObservedRunningTime="2026-01-28 18:29:04.720608325 +0000 UTC m=+955.547171156"
Jan 28 18:29:11 crc kubenswrapper[4985]: I0128 18:29:11.185681 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:29:11 crc kubenswrapper[4985]: I0128 18:29:11.186230 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.346077 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"]
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.347936 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.349812 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.360243 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"]
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.404348 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.404407 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.404673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.505902 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.506009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.506044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"
Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.507070 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName:
\"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.507326 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.528663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:33 crc kubenswrapper[4985]: I0128 18:29:33.674717 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:36 crc kubenswrapper[4985]: I0128 18:29:36.549649 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h"] Jan 28 18:29:36 crc kubenswrapper[4985]: I0128 18:29:36.945149 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerStarted","Data":"c993290ec5ddedbf6904755238b7d5ebfa7183fc2581d162c0318393f22c9f3d"} Jan 28 18:29:39 crc kubenswrapper[4985]: I0128 18:29:39.970295 4985 generic.go:334] "Generic (PLEG): container finished" podID="096a6287-784c-410e-99c8-16188796d2ea" containerID="ef14c315d552a784bc32f0bc199fe21bbf5063004c3778e86d59511172269245" exitCode=0 Jan 28 18:29:39 crc kubenswrapper[4985]: I0128 18:29:39.970476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"ef14c315d552a784bc32f0bc199fe21bbf5063004c3778e86d59511172269245"} Jan 28 18:29:39 crc kubenswrapper[4985]: I0128 18:29:39.973155 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.102999 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.104824 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.121870 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.220601 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.220861 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.220966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.322737 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.322799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.322844 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.323505 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.323546 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.343236 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"redhat-marketplace-sg7xt\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.421321 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.863865 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:29:40 crc kubenswrapper[4985]: W0128 18:29:40.868964 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod444d0c9f_09e7_49e1_9f49_6653d2f9befa.slice/crio-f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba WatchSource:0}: Error finding container f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba: Status 404 returned error can't find the container with id f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba Jan 28 18:29:40 crc kubenswrapper[4985]: I0128 18:29:40.979371 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerStarted","Data":"f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba"} Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.186469 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.186796 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.993083 4985 generic.go:334] "Generic (PLEG): container finished" podID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerID="bb7920b691aef048a369de5325cb19e6651ee07d08167e9693f136f8fd27957f" exitCode=0 Jan 28 18:29:41 crc kubenswrapper[4985]: I0128 18:29:41.993156 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"bb7920b691aef048a369de5325cb19e6651ee07d08167e9693f136f8fd27957f"} Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.048515 4985 generic.go:334] "Generic (PLEG): container finished" podID="096a6287-784c-410e-99c8-16188796d2ea" containerID="7229b8e58e9f7d6a84deea35c60f4407e557d28ea8eff0884b1dd6a2760ecd69" exitCode=0 Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.049154 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"7229b8e58e9f7d6a84deea35c60f4407e557d28ea8eff0884b1dd6a2760ecd69"} Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.055148 4985 generic.go:334] "Generic (PLEG): 
container finished" podID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerID="3b213516d9dcfab58c762cfeccdff8a6d947fb73a1b523f5d00aca85cbafab8e" exitCode=0 Jan 28 18:29:49 crc kubenswrapper[4985]: I0128 18:29:49.055505 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"3b213516d9dcfab58c762cfeccdff8a6d947fb73a1b523f5d00aca85cbafab8e"} Jan 28 18:29:50 crc kubenswrapper[4985]: I0128 18:29:50.064974 4985 generic.go:334] "Generic (PLEG): container finished" podID="096a6287-784c-410e-99c8-16188796d2ea" containerID="c1666e69c07f5a48bd38aebe27db263382fb3f97bfc9d5c4f5eba14abbf0aecd" exitCode=0 Jan 28 18:29:50 crc kubenswrapper[4985]: I0128 18:29:50.065033 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"c1666e69c07f5a48bd38aebe27db263382fb3f97bfc9d5c4f5eba14abbf0aecd"} Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.073647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerStarted","Data":"8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e"} Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.090203 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-sg7xt" podStartSLOduration=3.062685139 podStartE2EDuration="11.090189284s" podCreationTimestamp="2026-01-28 18:29:40 +0000 UTC" firstStartedPulling="2026-01-28 18:29:42.1679832 +0000 UTC m=+992.994546021" lastFinishedPulling="2026-01-28 18:29:50.195487345 +0000 UTC m=+1001.022050166" observedRunningTime="2026-01-28 18:29:51.088034223 +0000 UTC m=+1001.914597044" watchObservedRunningTime="2026-01-28 18:29:51.090189284 +0000 UTC m=+1001.916752105" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.378765 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.426153 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") pod \"096a6287-784c-410e-99c8-16188796d2ea\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.426265 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") pod \"096a6287-784c-410e-99c8-16188796d2ea\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.426326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") pod \"096a6287-784c-410e-99c8-16188796d2ea\" (UID: \"096a6287-784c-410e-99c8-16188796d2ea\") " Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.427926 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle" (OuterVolumeSpecName: "bundle") pod "096a6287-784c-410e-99c8-16188796d2ea" (UID: "096a6287-784c-410e-99c8-16188796d2ea"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.432839 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn" (OuterVolumeSpecName: "kube-api-access-s7stn") pod "096a6287-784c-410e-99c8-16188796d2ea" (UID: "096a6287-784c-410e-99c8-16188796d2ea"). InnerVolumeSpecName "kube-api-access-s7stn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.442334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util" (OuterVolumeSpecName: "util") pod "096a6287-784c-410e-99c8-16188796d2ea" (UID: "096a6287-784c-410e-99c8-16188796d2ea"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.528433 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.528473 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7stn\" (UniqueName: \"kubernetes.io/projected/096a6287-784c-410e-99c8-16188796d2ea-kube-api-access-s7stn\") on node \"crc\" DevicePath \"\"" Jan 28 18:29:51 crc kubenswrapper[4985]: I0128 18:29:51.528488 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/096a6287-784c-410e-99c8-16188796d2ea-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:29:52 crc kubenswrapper[4985]: I0128 18:29:52.083989 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" event={"ID":"096a6287-784c-410e-99c8-16188796d2ea","Type":"ContainerDied","Data":"c993290ec5ddedbf6904755238b7d5ebfa7183fc2581d162c0318393f22c9f3d"} Jan 28 18:29:52 crc kubenswrapper[4985]: I0128 18:29:52.084028 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h" Jan 28 18:29:52 crc kubenswrapper[4985]: I0128 18:29:52.084051 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c993290ec5ddedbf6904755238b7d5ebfa7183fc2581d162c0318393f22c9f3d" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807318 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ztr6n"] Jan 28 18:29:54 crc kubenswrapper[4985]: E0128 18:29:54.807921 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="util" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807934 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="util" Jan 28 18:29:54 crc kubenswrapper[4985]: E0128 18:29:54.807946 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="extract" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807953 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="extract" Jan 28 18:29:54 crc kubenswrapper[4985]: E0128 18:29:54.807965 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="pull" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.807974 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="pull" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.808139 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="096a6287-784c-410e-99c8-16188796d2ea" containerName="extract" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.808709 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.811236 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.811234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-ql7gj" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.811321 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.828718 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ztr6n"] Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.881316 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fn4ff\" (UniqueName: \"kubernetes.io/projected/e130755a-0d4d-4efd-a08a-a3bda72ff4cf-kube-api-access-fn4ff\") pod \"nmstate-operator-646758c888-ztr6n\" (UID: \"e130755a-0d4d-4efd-a08a-a3bda72ff4cf\") " pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" Jan 28 18:29:54 crc kubenswrapper[4985]: I0128 18:29:54.983496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fn4ff\" (UniqueName: \"kubernetes.io/projected/e130755a-0d4d-4efd-a08a-a3bda72ff4cf-kube-api-access-fn4ff\") pod \"nmstate-operator-646758c888-ztr6n\" (UID: \"e130755a-0d4d-4efd-a08a-a3bda72ff4cf\") " pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" Jan 28 18:29:55 crc kubenswrapper[4985]: I0128 18:29:55.010867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fn4ff\" (UniqueName: \"kubernetes.io/projected/e130755a-0d4d-4efd-a08a-a3bda72ff4cf-kube-api-access-fn4ff\") pod \"nmstate-operator-646758c888-ztr6n\" (UID: \"e130755a-0d4d-4efd-a08a-a3bda72ff4cf\") " pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" Jan 28 18:29:55 crc kubenswrapper[4985]: I0128 18:29:55.126756 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" Jan 28 18:29:55 crc kubenswrapper[4985]: I0128 18:29:55.358236 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-ztr6n"] Jan 28 18:29:56 crc kubenswrapper[4985]: I0128 18:29:56.115115 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" event={"ID":"e130755a-0d4d-4efd-a08a-a3bda72ff4cf","Type":"ContainerStarted","Data":"0b08347245eeb190ecdac216e6201c9e8dfda0ca2b3c9c7a046d047f32958d75"} Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.569854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.575555 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.590391 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.643908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.644101 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.644145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.745824 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.745911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.745999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.746386 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.746606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.764161 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"community-operators-7sz6k\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:58 crc kubenswrapper[4985]: I0128 18:29:58.900587 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:29:59 crc kubenswrapper[4985]: I0128 18:29:59.413139 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.147587 4985 generic.go:334] "Generic (PLEG): container finished" podID="07c652ff-94af-4252-802d-06c695e40bfb" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8" exitCode=0 Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.147643 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8"} Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.147866 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerStarted","Data":"cd8f4c0b360f8a01b98642a24d5480d1d28c8d20e2ef03104e449bd3d4e18f02"} Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.156912 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" event={"ID":"e130755a-0d4d-4efd-a08a-a3bda72ff4cf","Type":"ContainerStarted","Data":"e3fd21fb465a6ac7055f72a90b6622ed66f483ee3e1aacc8f27bac8a9f8abea1"} Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.161822 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"] Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.163211 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.165144 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.165894 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.217180 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"] Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.238321 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-ztr6n" podStartSLOduration=2.322589297 podStartE2EDuration="6.238307177s" podCreationTimestamp="2026-01-28 18:29:54 +0000 UTC" firstStartedPulling="2026-01-28 18:29:55.363322815 +0000 UTC m=+1006.189885636" lastFinishedPulling="2026-01-28 18:29:59.279040705 +0000 UTC m=+1010.105603516" observedRunningTime="2026-01-28 18:30:00.23697769 +0000 UTC m=+1011.063540511" watchObservedRunningTime="2026-01-28 18:30:00.238307177 +0000 UTC m=+1011.064869998" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.288647 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.288747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.288780 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.390355 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.390445 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc 
kubenswrapper[4985]: I0128 18:30:00.390512 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.393320 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.400822 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.428058 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.428114 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.434703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"collect-profiles-29493750-zsmmm\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.469324 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.479670 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:00 crc kubenswrapper[4985]: W0128 18:30:00.935185 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfca2781_d8d0_4e7e_85c8_d337780059ae.slice/crio-7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b WatchSource:0}: Error finding container 7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b: Status 404 returned error can't find the container with id 7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b Jan 28 18:30:00 crc kubenswrapper[4985]: I0128 18:30:00.944342 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"] Jan 28 18:30:01 crc kubenswrapper[4985]: I0128 18:30:01.163997 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" event={"ID":"dfca2781-d8d0-4e7e-85c8-d337780059ae","Type":"ContainerStarted","Data":"7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b"} Jan 28 18:30:01 crc kubenswrapper[4985]: I0128 18:30:01.212298 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.174311 4985 generic.go:334] "Generic (PLEG): container finished" podID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerID="0f1e952a6fa49b7083594207d25422769b2776c1aec196aa97dc536dd6123d3e" exitCode=0 Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.174355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" event={"ID":"dfca2781-d8d0-4e7e-85c8-d337780059ae","Type":"ContainerDied","Data":"0f1e952a6fa49b7083594207d25422769b2776c1aec196aa97dc536dd6123d3e"} Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.939451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"] Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.940714 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.943001 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-hjdn7" Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.943204 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.961684 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-gkjzc"] Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.962943 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.968738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kl8j\" (UniqueName: \"kubernetes.io/projected/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-kube-api-access-4kl8j\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.968869 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.974976 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"] Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.976733 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vznlg"] Jan 28 18:30:02 crc kubenswrapper[4985]: I0128 18:30:02.977928 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.002104 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vznlg"] Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071337 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2x7rg\" (UniqueName: \"kubernetes.io/projected/05eeb2e4-510c-4b66-addf-efaddce8cfb0-kube-api-access-2x7rg\") pod \"nmstate-metrics-54757c584b-vznlg\" (UID: \"05eeb2e4-510c-4b66-addf-efaddce8cfb0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bj2k\" (UniqueName: \"kubernetes.io/projected/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-kube-api-access-4bj2k\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071482 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-nmstate-lock\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071514 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-dbus-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071579 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kl8j\" (UniqueName: \"kubernetes.io/projected/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-kube-api-access-4kl8j\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.071645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-ovs-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.071878 4985 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.072025 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair podName:645ec0ef-97a6-4e2f-b691-ffcbcab4eed7 nodeName:}" failed. No retries permitted until 2026-01-28 18:30:03.571999689 +0000 UTC m=+1014.398562510 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-jrf9w" (UID: "645ec0ef-97a6-4e2f-b691-ffcbcab4eed7") : secret "openshift-nmstate-webhook" not found Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.102888 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"] Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.109666 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kl8j\" (UniqueName: \"kubernetes.io/projected/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-kube-api-access-4kl8j\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.119934 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.125884 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.126183 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-nsd86" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.126229 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.127238 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"] Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172735 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6b5v\" (UniqueName: \"kubernetes.io/projected/b866e710-8894-47da-9251-4118fec613bd-kube-api-access-d6b5v\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172834 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-ovs-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172868 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866e710-8894-47da-9251-4118fec613bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172905 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2x7rg\" (UniqueName: \"kubernetes.io/projected/05eeb2e4-510c-4b66-addf-efaddce8cfb0-kube-api-access-2x7rg\") pod \"nmstate-metrics-54757c584b-vznlg\" (UID: \"05eeb2e4-510c-4b66-addf-efaddce8cfb0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172941 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.172987 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bj2k\" (UniqueName: \"kubernetes.io/projected/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-kube-api-access-4bj2k\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173023 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-nmstate-lock\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173047 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-dbus-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173410 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-dbus-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-ovs-socket\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.173876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-nmstate-lock\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.193011 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerStarted","Data":"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a"} Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.195917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2x7rg\" (UniqueName: \"kubernetes.io/projected/05eeb2e4-510c-4b66-addf-efaddce8cfb0-kube-api-access-2x7rg\") pod \"nmstate-metrics-54757c584b-vznlg\" (UID: \"05eeb2e4-510c-4b66-addf-efaddce8cfb0\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.203178 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bj2k\" (UniqueName: \"kubernetes.io/projected/8f0319d2-9602-42b4-a3fb-c53bf5d3c244-kube-api-access-4bj2k\") pod \"nmstate-handler-gkjzc\" (UID: \"8f0319d2-9602-42b4-a3fb-c53bf5d3c244\") " pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.275986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.276074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6b5v\" (UniqueName: \"kubernetes.io/projected/b866e710-8894-47da-9251-4118fec613bd-kube-api-access-d6b5v\") pod 
\"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.276141 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866e710-8894-47da-9251-4118fec613bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.276974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/b866e710-8894-47da-9251-4118fec613bd-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.277054 4985 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 18:30:03 crc kubenswrapper[4985]: E0128 18:30:03.277093 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert podName:b866e710-8894-47da-9251-4118fec613bd nodeName:}" failed. No retries permitted until 2026-01-28 18:30:03.777079929 +0000 UTC m=+1014.603642750 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-slwkn" (UID: "b866e710-8894-47da-9251-4118fec613bd") : secret "plugin-serving-cert" not found Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.306381 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6b5v\" (UniqueName: \"kubernetes.io/projected/b866e710-8894-47da-9251-4118fec613bd-kube-api-access-d6b5v\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.329956 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-64878fb8f-ljltp"] Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.330938 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.360713 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.362586 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.370999 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"] Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386227 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386355 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386448 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386498 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386530 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386636 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.386656 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497395 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: 
I0128 18:30:03.497764 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497870 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497937 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.497961 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.498600 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.498666 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.498692 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.499545 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.499642 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.501081 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.506233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.508789 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.521449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"console-64878fb8f-ljltp\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.585858 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.599831 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.619895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/645ec0ef-97a6-4e2f-b691-ffcbcab4eed7-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-jrf9w\" (UID: \"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.626934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.662416 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.728650 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") pod \"dfca2781-d8d0-4e7e-85c8-d337780059ae\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.728841 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") pod \"dfca2781-d8d0-4e7e-85c8-d337780059ae\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.728985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") pod \"dfca2781-d8d0-4e7e-85c8-d337780059ae\" (UID: \"dfca2781-d8d0-4e7e-85c8-d337780059ae\") " Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.731965 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dfca2781-d8d0-4e7e-85c8-d337780059ae" (UID: "dfca2781-d8d0-4e7e-85c8-d337780059ae"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.732669 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2" (OuterVolumeSpecName: "kube-api-access-2p4d2") pod "dfca2781-d8d0-4e7e-85c8-d337780059ae" (UID: "dfca2781-d8d0-4e7e-85c8-d337780059ae"). InnerVolumeSpecName "kube-api-access-2p4d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.739244 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume" (OuterVolumeSpecName: "config-volume") pod "dfca2781-d8d0-4e7e-85c8-d337780059ae" (UID: "dfca2781-d8d0-4e7e-85c8-d337780059ae"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.768550 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.768870 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-sg7xt" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" containerID="cri-o://8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e" gracePeriod=2 Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830461 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830583 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dfca2781-d8d0-4e7e-85c8-d337780059ae-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830601 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2p4d2\" (UniqueName: \"kubernetes.io/projected/dfca2781-d8d0-4e7e-85c8-d337780059ae-kube-api-access-2p4d2\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.830612 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfca2781-d8d0-4e7e-85c8-d337780059ae-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.835211 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/b866e710-8894-47da-9251-4118fec613bd-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-slwkn\" (UID: \"b866e710-8894-47da-9251-4118fec613bd\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:03 crc kubenswrapper[4985]: I0128 18:30:03.971059 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-vznlg"] Jan 28 18:30:03 crc kubenswrapper[4985]: W0128 18:30:03.987297 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05eeb2e4_510c_4b66_addf_efaddce8cfb0.slice/crio-e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84 WatchSource:0}: Error finding container e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84: Status 404 returned error can't find the container with id e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84 Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.058027 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.170007 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"] Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.181619 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"] Jan 28 18:30:04 crc kubenswrapper[4985]: W0128 18:30:04.195042 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d2b3a75_cb2e_41a2_9005_a72a8aebb818.slice/crio-5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e WatchSource:0}: Error finding container 5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e: Status 404 returned error can't find the container with id 5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e Jan 28 18:30:04 crc kubenswrapper[4985]: W0128 18:30:04.195544 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod645ec0ef_97a6_4e2f_b691_ffcbcab4eed7.slice/crio-530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f WatchSource:0}: Error finding container 530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f: Status 404 returned error can't find the container with id 530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.218940 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.220950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm" event={"ID":"dfca2781-d8d0-4e7e-85c8-d337780059ae","Type":"ContainerDied","Data":"7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b"} Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.221010 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8f55fdc601e2cd57f9ab43e7e0a4b1295038583d07418861e2f6a2c180d56b" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.230328 4985 generic.go:334] "Generic (PLEG): container finished" podID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerID="8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e" exitCode=0 Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.230404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e"} Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.231623 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gkjzc" event={"ID":"8f0319d2-9602-42b4-a3fb-c53bf5d3c244","Type":"ContainerStarted","Data":"55a9a2e0be146cd8425f05f9bf9091b12c0dcc737731c765ee5c74965d814b6b"} Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.232735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" event={"ID":"05eeb2e4-510c-4b66-addf-efaddce8cfb0","Type":"ContainerStarted","Data":"e6711aa662b53f9a3c008ffb37df7827502e2fc6bed414fa5ba198cfb203da84"} Jan 28 18:30:04 crc kubenswrapper[4985]: 
I0128 18:30:04.235724 4985 generic.go:334] "Generic (PLEG): container finished" podID="07c652ff-94af-4252-802d-06c695e40bfb" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" exitCode=0 Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.235770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a"} Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.273808 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.349129 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") pod \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.349544 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") pod \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.349713 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") pod \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\" (UID: \"444d0c9f-09e7-49e1-9f49-6653d2f9befa\") " Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.352012 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities" (OuterVolumeSpecName: "utilities") pod "444d0c9f-09e7-49e1-9f49-6653d2f9befa" (UID: "444d0c9f-09e7-49e1-9f49-6653d2f9befa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.357958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng" (OuterVolumeSpecName: "kube-api-access-pstng") pod "444d0c9f-09e7-49e1-9f49-6653d2f9befa" (UID: "444d0c9f-09e7-49e1-9f49-6653d2f9befa"). InnerVolumeSpecName "kube-api-access-pstng". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.376392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "444d0c9f-09e7-49e1-9f49-6653d2f9befa" (UID: "444d0c9f-09e7-49e1-9f49-6653d2f9befa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.452939 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.453018 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/444d0c9f-09e7-49e1-9f49-6653d2f9befa-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.453086 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pstng\" (UniqueName: \"kubernetes.io/projected/444d0c9f-09e7-49e1-9f49-6653d2f9befa-kube-api-access-pstng\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:04 crc kubenswrapper[4985]: I0128 18:30:04.566991 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn"] Jan 28 18:30:04 crc kubenswrapper[4985]: W0128 18:30:04.570914 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb866e710_8894_47da_9251_4118fec613bd.slice/crio-08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d WatchSource:0}: Error finding container 08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d: Status 404 returned error can't find the container with id 08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.242938 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" event={"ID":"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7","Type":"ContainerStarted","Data":"530b29cee5f7f5a8a342bb33ce184ad39ef8654ff8359f430cccd5a4e812116f"} Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.249533 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-sg7xt" event={"ID":"444d0c9f-09e7-49e1-9f49-6653d2f9befa","Type":"ContainerDied","Data":"f5879c7c7a742df197b5811ff0ab172c046acd6e80827906a012312347cce0ba"} Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.249553 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-sg7xt" Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.249603 4985 scope.go:117] "RemoveContainer" containerID="8ad35cae803c470b7bc04f9fe7daa14220aef328cfcdca241aca2cc4781de99e" Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.250986 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerStarted","Data":"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8"} Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.251044 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerStarted","Data":"5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e"} Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.253276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" event={"ID":"b866e710-8894-47da-9251-4118fec613bd","Type":"ContainerStarted","Data":"08ac7aec2af4f6f7dcd626d2d1da9fe5dce4d37eb1ad61ba3d4fb0bbe11f2a0d"} Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.257463 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerStarted","Data":"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315"} Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.279975 4985 scope.go:117] "RemoveContainer" containerID="3b213516d9dcfab58c762cfeccdff8a6d947fb73a1b523f5d00aca85cbafab8e" Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.285806 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64878fb8f-ljltp" podStartSLOduration=2.285781718 podStartE2EDuration="2.285781718s" podCreationTimestamp="2026-01-28 18:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:30:05.269779407 +0000 UTC m=+1016.096342248" watchObservedRunningTime="2026-01-28 18:30:05.285781718 +0000 UTC m=+1016.112344559" Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.296072 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.309280 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-sg7xt"] Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.311914 4985 scope.go:117] "RemoveContainer" containerID="bb7920b691aef048a369de5325cb19e6651ee07d08167e9693f136f8fd27957f" Jan 28 18:30:05 crc kubenswrapper[4985]: I0128 18:30:05.312093 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7sz6k" podStartSLOduration=2.639251667 podStartE2EDuration="7.312071771s" podCreationTimestamp="2026-01-28 18:29:58 +0000 UTC" firstStartedPulling="2026-01-28 18:30:00.150134198 +0000 UTC m=+1010.976697039" lastFinishedPulling="2026-01-28 18:30:04.822954322 +0000 UTC m=+1015.649517143" observedRunningTime="2026-01-28 18:30:05.308993574 +0000 UTC m=+1016.135556415" watchObservedRunningTime="2026-01-28 18:30:05.312071771 +0000 UTC m=+1016.138634592" Jan 28 18:30:07 crc kubenswrapper[4985]: I0128 18:30:07.272950 4985 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" path="/var/lib/kubelet/pods/444d0c9f-09e7-49e1-9f49-6653d2f9befa/volumes" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.286041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" event={"ID":"645ec0ef-97a6-4e2f-b691-ffcbcab4eed7","Type":"ContainerStarted","Data":"6b381f3165c4388b77a018937ba97684d69b5b201d009ab83290fe218f296818"} Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.287186 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.287887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-gkjzc" event={"ID":"8f0319d2-9602-42b4-a3fb-c53bf5d3c244","Type":"ContainerStarted","Data":"14d02fbaf84ba0b3756257de3e54645c51e770acf80b650947908cdd2ff23bd5"} Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.288823 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.291016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" event={"ID":"05eeb2e4-510c-4b66-addf-efaddce8cfb0","Type":"ContainerStarted","Data":"cd9da237246485b2ca7075506e0dcb6c08ef6571d863749756757d4a23d9c606"} Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.292759 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" event={"ID":"b866e710-8894-47da-9251-4118fec613bd","Type":"ContainerStarted","Data":"8f61ae2e19dd8ff4b74cf00847abb484ed986b7e49d0927e9f5ec4ff74395124"} Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.312837 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podStartSLOduration=3.041194574 podStartE2EDuration="6.31281354s" podCreationTimestamp="2026-01-28 18:30:02 +0000 UTC" firstStartedPulling="2026-01-28 18:30:04.220049211 +0000 UTC m=+1015.046612032" lastFinishedPulling="2026-01-28 18:30:07.491668177 +0000 UTC m=+1018.318230998" observedRunningTime="2026-01-28 18:30:08.304475065 +0000 UTC m=+1019.131037896" watchObservedRunningTime="2026-01-28 18:30:08.31281354 +0000 UTC m=+1019.139376361" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.334349 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-gkjzc" podStartSLOduration=2.30364553 podStartE2EDuration="6.334323827s" podCreationTimestamp="2026-01-28 18:30:02 +0000 UTC" firstStartedPulling="2026-01-28 18:30:03.426644651 +0000 UTC m=+1014.253207472" lastFinishedPulling="2026-01-28 18:30:07.457322948 +0000 UTC m=+1018.283885769" observedRunningTime="2026-01-28 18:30:08.329393538 +0000 UTC m=+1019.155956359" watchObservedRunningTime="2026-01-28 18:30:08.334323827 +0000 UTC m=+1019.160886648" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.351103 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-slwkn" podStartSLOduration=2.468855027 podStartE2EDuration="5.35108413s" podCreationTimestamp="2026-01-28 18:30:03 +0000 UTC" firstStartedPulling="2026-01-28 18:30:04.572980865 +0000 UTC m=+1015.399543686" lastFinishedPulling="2026-01-28 18:30:07.455209968 +0000 UTC 
m=+1018.281772789" observedRunningTime="2026-01-28 18:30:08.341719835 +0000 UTC m=+1019.168282656" watchObservedRunningTime="2026-01-28 18:30:08.35108413 +0000 UTC m=+1019.177646951" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.901106 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.901210 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:08 crc kubenswrapper[4985]: I0128 18:30:08.965586 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:09 crc kubenswrapper[4985]: I0128 18:30:09.347952 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:10 crc kubenswrapper[4985]: I0128 18:30:10.160966 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:30:10 crc kubenswrapper[4985]: I0128 18:30:10.312150 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" event={"ID":"05eeb2e4-510c-4b66-addf-efaddce8cfb0","Type":"ContainerStarted","Data":"f552673294749f53337e4e8377ebec4b9bfdb34cb827a4f3dc0232acf5bfa0d0"} Jan 28 18:30:10 crc kubenswrapper[4985]: I0128 18:30:10.334883 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-vznlg" podStartSLOduration=2.259466174 podStartE2EDuration="8.334865077s" podCreationTimestamp="2026-01-28 18:30:02 +0000 UTC" firstStartedPulling="2026-01-28 18:30:03.990516161 +0000 UTC m=+1014.817078982" lastFinishedPulling="2026-01-28 18:30:10.065915064 +0000 UTC m=+1020.892477885" observedRunningTime="2026-01-28 18:30:10.328897338 +0000 UTC m=+1021.155460159" watchObservedRunningTime="2026-01-28 18:30:10.334865077 +0000 UTC m=+1021.161427898" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.185847 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.185923 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.185982 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.186952 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.187040 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab" gracePeriod=600 Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.323990 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab" exitCode=0 Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.324125 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab"} Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.324470 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7sz6k" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" containerID="cri-o://1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" gracePeriod=2 Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.324544 4985 scope.go:117] "RemoveContainer" containerID="adb4c0ed7f790cd18a413d636ed6bf707c0edf095d524face3ee33b0664e4ff2" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.759621 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.891459 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") pod \"07c652ff-94af-4252-802d-06c695e40bfb\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.891589 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") pod \"07c652ff-94af-4252-802d-06c695e40bfb\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.891686 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") pod \"07c652ff-94af-4252-802d-06c695e40bfb\" (UID: \"07c652ff-94af-4252-802d-06c695e40bfb\") " Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.892589 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities" (OuterVolumeSpecName: "utilities") pod "07c652ff-94af-4252-802d-06c695e40bfb" (UID: "07c652ff-94af-4252-802d-06c695e40bfb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.896515 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv" (OuterVolumeSpecName: "kube-api-access-zq5mv") pod "07c652ff-94af-4252-802d-06c695e40bfb" (UID: "07c652ff-94af-4252-802d-06c695e40bfb"). 
InnerVolumeSpecName "kube-api-access-zq5mv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.948102 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07c652ff-94af-4252-802d-06c695e40bfb" (UID: "07c652ff-94af-4252-802d-06c695e40bfb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.993658 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.993703 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07c652ff-94af-4252-802d-06c695e40bfb-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:11 crc kubenswrapper[4985]: I0128 18:30:11.993720 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zq5mv\" (UniqueName: \"kubernetes.io/projected/07c652ff-94af-4252-802d-06c695e40bfb-kube-api-access-zq5mv\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337722 4985 generic.go:334] "Generic (PLEG): container finished" podID="07c652ff-94af-4252-802d-06c695e40bfb" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" exitCode=0 Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337813 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7sz6k" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315"} Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337902 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7sz6k" event={"ID":"07c652ff-94af-4252-802d-06c695e40bfb","Type":"ContainerDied","Data":"cd8f4c0b360f8a01b98642a24d5480d1d28c8d20e2ef03104e449bd3d4e18f02"} Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.337927 4985 scope.go:117] "RemoveContainer" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.341090 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093"} Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.359078 4985 scope.go:117] "RemoveContainer" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.393506 4985 scope.go:117] "RemoveContainer" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.394094 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 
18:30:12.399878 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7sz6k"] Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.413532 4985 scope.go:117] "RemoveContainer" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" Jan 28 18:30:12 crc kubenswrapper[4985]: E0128 18:30:12.414191 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315\": container with ID starting with 1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315 not found: ID does not exist" containerID="1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414231 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315"} err="failed to get container status \"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315\": rpc error: code = NotFound desc = could not find container \"1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315\": container with ID starting with 1b84447de323a21b165abedbc3b5618a47269ec8a3c1ada3bf970d639351b315 not found: ID does not exist" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414284 4985 scope.go:117] "RemoveContainer" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" Jan 28 18:30:12 crc kubenswrapper[4985]: E0128 18:30:12.414756 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a\": container with ID starting with 5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a not found: ID does not exist" containerID="5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414789 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a"} err="failed to get container status \"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a\": rpc error: code = NotFound desc = could not find container \"5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a\": container with ID starting with 5acace34989efc6c0f15b3fab256d694e626dd1d718ae4f3ac706f3f9a92bb4a not found: ID does not exist" Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.414810 4985 scope.go:117] "RemoveContainer" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8" Jan 28 18:30:12 crc kubenswrapper[4985]: E0128 18:30:12.415124 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8\": container with ID starting with f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8 not found: ID does not exist" containerID="f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8"
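
The E-level ContainerStatus/DeleteContainer entries here are benign: the containers were already removed during the earlier teardown, and the runtime's gRPC NotFound answer simply confirms there is nothing left to delete; cleanup continues regardless (the pod's orphaned volumes directory is reaped at 18:30:13 below). A common pattern for keeping such removals idempotent, sketched with the standard gRPC status helpers (this is not the kubelet's actual code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // removeContainer treats a gRPC NotFound from the runtime as success,
    // so repeated cleanup of an already-deleted container cannot fail.
    func removeContainer(remove func(id string) error, id string) error {
    	if err := remove(id); err != nil && status.Code(err) != codes.NotFound {
    		return err
    	}
    	return nil
    }

    func main() {
    	// fakeRemove stands in for the CRI RemoveContainer call.
    	fakeRemove := func(id string) error {
    		return status.Error(codes.NotFound, "could not find container "+id)
    	}
    	fmt.Println(removeContainer(fakeRemove, "f72ed3f0")) // prints <nil>
    }

Jan 28 18:30:12 crc kubenswrapper[4985]: I0128 18:30:12.415152 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8"} err="failed to get container status 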
\"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8\": rpc error: code = NotFound desc = could not find container \"f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8\": container with ID starting with f72ed3f0e598cb245b59b11a3eb819a37aa2fafcc1146b5f07eb5720325e68c8 not found: ID does not exist" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.273997 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c652ff-94af-4252-802d-06c695e40bfb" path="/var/lib/kubelet/pods/07c652ff-94af-4252-802d-06c695e40bfb/volumes" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.403344 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.662914 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.662999 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:13 crc kubenswrapper[4985]: I0128 18:30:13.671235 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:14 crc kubenswrapper[4985]: I0128 18:30:14.360681 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:30:14 crc kubenswrapper[4985]: I0128 18:30:14.417083 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:30:23 crc kubenswrapper[4985]: I0128 18:30:23.637419 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" Jan 28 18:30:39 crc kubenswrapper[4985]: I0128 18:30:39.485556 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-cd8f6d96f-p5cf4" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" containerID="cri-o://12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" gracePeriod=15 Jan 28 18:30:39 crc kubenswrapper[4985]: I0128 18:30:39.661499 4985 patch_prober.go:28] interesting pod/console-cd8f6d96f-p5cf4 container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.85:8443/health\": dial tcp 10.217.0.85:8443: connect: connection refused" start-of-body= Jan 28 18:30:39 crc kubenswrapper[4985]: I0128 18:30:39.661955 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-cd8f6d96f-p5cf4" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" probeResult="failure" output="Get \"https://10.217.0.85:8443/health\": dial tcp 10.217.0.85:8443: connect: connection refused" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.124720 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw"] Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125061 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125076 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: 
E0128 18:30:40.125114 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125122 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125140 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125149 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-utilities" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125178 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125186 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125207 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerName="collect-profiles" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125215 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerName="collect-profiles" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125228 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125236 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.125272 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125280 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="extract-content" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125483 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" containerName="collect-profiles" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125500 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c652ff-94af-4252-802d-06c695e40bfb" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.125512 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="444d0c9f-09e7-49e1-9f49-6653d2f9befa" containerName="registry-server" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.129852 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.133682 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.134934 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw"] Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.294082 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.294140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.294223 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.395867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.395971 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.396017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.396481 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.396479 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.414017 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.446364 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.538140 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cd8f6d96f-p5cf4_a056a5e7-3897-4712-960c-e0211c7b3062/console/0.log" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.538205 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588096 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-cd8f6d96f-p5cf4_a056a5e7-3897-4712-960c-e0211c7b3062/console/0.log" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588155 4985 generic.go:334] "Generic (PLEG): container finished" podID="a056a5e7-3897-4712-960c-e0211c7b3062" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" exitCode=2 Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerDied","Data":"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55"} Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-cd8f6d96f-p5cf4" event={"ID":"a056a5e7-3897-4712-960c-e0211c7b3062","Type":"ContainerDied","Data":"6757ef85c9af6b8087e2bbaecccf725d4d9f1d7a4e12622260f4ddbd98525b61"} Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588242 4985 scope.go:117] "RemoveContainer" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.588258 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-cd8f6d96f-p5cf4" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.624609 4985 scope.go:117] "RemoveContainer" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" Jan 28 18:30:40 crc kubenswrapper[4985]: E0128 18:30:40.639212 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55\": container with ID starting with 12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55 not found: ID does not exist" containerID="12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.639324 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55"} err="failed to get container status \"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55\": rpc error: code = NotFound desc = could not find container \"12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55\": container with ID starting with 12a4e531f47df603923a5c50f4490e7a862f4f0f92f1d7124cce85b77ca25e55 not found: ID does not exist" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706075 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706444 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706558 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706624 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706657 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706682 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.706701 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") pod \"a056a5e7-3897-4712-960c-e0211c7b3062\" (UID: \"a056a5e7-3897-4712-960c-e0211c7b3062\") " Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.709003 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.710529 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca" (OuterVolumeSpecName: "service-ca") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.710877 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config" (OuterVolumeSpecName: "console-config") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.711200 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.723665 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.727469 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.732392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v" (OuterVolumeSpecName: "kube-api-access-vb29v") pod "a056a5e7-3897-4712-960c-e0211c7b3062" (UID: "a056a5e7-3897-4712-960c-e0211c7b3062"). InnerVolumeSpecName "kube-api-access-vb29v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808570 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808601 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808636 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808796 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb29v\" (UniqueName: \"kubernetes.io/projected/a056a5e7-3897-4712-960c-e0211c7b3062-kube-api-access-vb29v\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808812 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808823 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a056a5e7-3897-4712-960c-e0211c7b3062-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.808830 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a056a5e7-3897-4712-960c-e0211c7b3062-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.917141 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.923743 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-cd8f6d96f-p5cf4"] Jan 28 18:30:40 crc kubenswrapper[4985]: I0128 18:30:40.953589 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw"] Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.276469 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" path="/var/lib/kubelet/pods/a056a5e7-3897-4712-960c-e0211c7b3062/volumes" Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.607547 4985 generic.go:334] "Generic (PLEG): container finished" podID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerID="f01564deafeadd6b998299c4c5ab42888fcd5f692a0e41851fa650ff19085772" exitCode=0 Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.607631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"f01564deafeadd6b998299c4c5ab42888fcd5f692a0e41851fa650ff19085772"} Jan 28 18:30:41 crc kubenswrapper[4985]: I0128 18:30:41.607669 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerStarted","Data":"d862596b70179867a2d1d1607ff3f8f4ee055f5aac6c96bf0deaa7806ec19d70"} Jan 28 18:30:44 crc kubenswrapper[4985]: I0128 18:30:44.632638 4985 generic.go:334] "Generic (PLEG): container finished" podID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerID="2e43827cfcb704b295c3dc551b2d4faca86ff7e70beb4fc6babf08be4f0b6f9f" exitCode=0 Jan 28 18:30:44 crc kubenswrapper[4985]: I0128 18:30:44.632694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"2e43827cfcb704b295c3dc551b2d4faca86ff7e70beb4fc6babf08be4f0b6f9f"} Jan 28 18:30:45 crc kubenswrapper[4985]: I0128 18:30:45.643018 4985 generic.go:334] "Generic (PLEG): container finished" podID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerID="84b172f9348b7b34fa12131848f32c49d5d898b4bb06d7fa4c0b794dd9d81624" exitCode=0 Jan 28 18:30:45 crc kubenswrapper[4985]: I0128 18:30:45.643095 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"84b172f9348b7b34fa12131848f32c49d5d898b4bb06d7fa4c0b794dd9d81624"} Jan 28 18:30:46 crc kubenswrapper[4985]: I0128 18:30:46.920064 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.004732 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") pod \"9ec863bb-8b63-4362-9bc6-93c91eebec21\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.004899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") pod \"9ec863bb-8b63-4362-9bc6-93c91eebec21\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.004957 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") pod \"9ec863bb-8b63-4362-9bc6-93c91eebec21\" (UID: \"9ec863bb-8b63-4362-9bc6-93c91eebec21\") " Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.005825 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle" (OuterVolumeSpecName: "bundle") pod "9ec863bb-8b63-4362-9bc6-93c91eebec21" (UID: "9ec863bb-8b63-4362-9bc6-93c91eebec21"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.010480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf" (OuterVolumeSpecName: "kube-api-access-tpkcf") pod "9ec863bb-8b63-4362-9bc6-93c91eebec21" (UID: "9ec863bb-8b63-4362-9bc6-93c91eebec21"). 
InnerVolumeSpecName "kube-api-access-tpkcf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.107290 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tpkcf\" (UniqueName: \"kubernetes.io/projected/9ec863bb-8b63-4362-9bc6-93c91eebec21-kube-api-access-tpkcf\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.107326 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.660015 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" event={"ID":"9ec863bb-8b63-4362-9bc6-93c91eebec21","Type":"ContainerDied","Data":"d862596b70179867a2d1d1607ff3f8f4ee055f5aac6c96bf0deaa7806ec19d70"} Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.660058 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d862596b70179867a2d1d1607ff3f8f4ee055f5aac6c96bf0deaa7806ec19d70" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.660102 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.796153 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util" (OuterVolumeSpecName: "util") pod "9ec863bb-8b63-4362-9bc6-93c91eebec21" (UID: "9ec863bb-8b63-4362-9bc6-93c91eebec21"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:30:47 crc kubenswrapper[4985]: I0128 18:30:47.819238 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9ec863bb-8b63-4362-9bc6-93c91eebec21-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.915973 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5"] Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.916923 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="extract" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.916938 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="extract" Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.916988 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="pull" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.916997 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="pull" Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.917014 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="util" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917021 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="util" Jan 28 18:30:57 crc kubenswrapper[4985]: E0128 18:30:57.917034 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917041 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917221 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ec863bb-8b63-4362-9bc6-93c91eebec21" containerName="extract" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917263 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a056a5e7-3897-4712-960c-e0211c7b3062" containerName="console" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.917918 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.923906 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.923955 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.924165 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.924369 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.924460 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-cgp4v" Jan 28 18:30:57 crc kubenswrapper[4985]: I0128 18:30:57.941914 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5"] Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.086956 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-apiservice-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.087234 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-webhook-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.087361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvbtm\" (UniqueName: \"kubernetes.io/projected/c77a825c-f720-48a7-b74f-49b16e3ecbed-kube-api-access-nvbtm\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.189115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvbtm\" (UniqueName: \"kubernetes.io/projected/c77a825c-f720-48a7-b74f-49b16e3ecbed-kube-api-access-nvbtm\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.189222 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-apiservice-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.189283 
4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-webhook-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.194978 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-apiservice-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.195022 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c77a825c-f720-48a7-b74f-49b16e3ecbed-webhook-cert\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.213364 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvbtm\" (UniqueName: \"kubernetes.io/projected/c77a825c-f720-48a7-b74f-49b16e3ecbed-kube-api-access-nvbtm\") pod \"metallb-operator-controller-manager-74b956d56f-86jl5\" (UID: \"c77a825c-f720-48a7-b74f-49b16e3ecbed\") " pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.238184 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.241709 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz"] Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.242767 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.245690 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-p7k28" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.246497 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.246654 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.258402 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz"] Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.395610 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bqwz\" (UniqueName: \"kubernetes.io/projected/57ef54a5-9891-4f69-9907-b726d30d4006-kube-api-access-8bqwz\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.396020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-webhook-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.396105 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.497821 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.497905 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8bqwz\" (UniqueName: \"kubernetes.io/projected/57ef54a5-9891-4f69-9907-b726d30d4006-kube-api-access-8bqwz\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.498027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-webhook-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.507178 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-apiservice-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.519492 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/57ef54a5-9891-4f69-9907-b726d30d4006-webhook-cert\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.524563 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8bqwz\" (UniqueName: \"kubernetes.io/projected/57ef54a5-9891-4f69-9907-b726d30d4006-kube-api-access-8bqwz\") pod \"metallb-operator-webhook-server-fd7b78bd4-c2clz\" (UID: \"57ef54a5-9891-4f69-9907-b726d30d4006\") " pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.619349 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.714366 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5"] Jan 28 18:30:58 crc kubenswrapper[4985]: W0128 18:30:58.717915 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc77a825c_f720_48a7_b74f_49b16e3ecbed.slice/crio-837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014 WatchSource:0}: Error finding container 837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014: Status 404 returned error can't find the container with id 837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014 Jan 28 18:30:58 crc kubenswrapper[4985]: I0128 18:30:58.755432 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerStarted","Data":"837680f8e9df9b6ba4f1323b1f7c08a49bd0b5e7b486f31a278c00a04e1e8014"} Jan 28 18:30:59 crc kubenswrapper[4985]: I0128 18:30:59.098718 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz"] Jan 28 18:30:59 crc kubenswrapper[4985]: I0128 18:30:59.763435 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerStarted","Data":"92e3645c86e6c8b47b14b5900b2700375dc4f20d875058684762005ebe04f0a1"} Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.811980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerStarted","Data":"fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae"} Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.812599 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.818592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerStarted","Data":"c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26"} Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.818840 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.846289 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podStartSLOduration=1.9125520580000002 podStartE2EDuration="6.846271338s" podCreationTimestamp="2026-01-28 18:30:58 +0000 UTC" firstStartedPulling="2026-01-28 18:30:59.113723025 +0000 UTC m=+1069.940285846" lastFinishedPulling="2026-01-28 18:31:04.047442305 +0000 UTC m=+1074.874005126" observedRunningTime="2026-01-28 18:31:04.840814584 +0000 UTC m=+1075.667377405" watchObservedRunningTime="2026-01-28 18:31:04.846271338 +0000 UTC m=+1075.672834159" Jan 28 18:31:04 crc kubenswrapper[4985]: I0128 18:31:04.865915 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podStartSLOduration=4.691004668 podStartE2EDuration="7.865897333s" podCreationTimestamp="2026-01-28 18:30:57 +0000 UTC" firstStartedPulling="2026-01-28 18:30:58.724735793 +0000 UTC m=+1069.551298614" lastFinishedPulling="2026-01-28 18:31:01.899628458 +0000 UTC m=+1072.726191279" observedRunningTime="2026-01-28 18:31:04.865725238 +0000 UTC m=+1075.692288059" watchObservedRunningTime="2026-01-28 18:31:04.865897333 +0000 UTC m=+1075.692460154" Jan 28 18:31:18 crc kubenswrapper[4985]: I0128 18:31:18.626588 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 18:31:38 crc kubenswrapper[4985]: I0128 18:31:38.243522 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.015217 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-qlsnv"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.019004 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.020598 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-nmf2x" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.021051 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.021262 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.048109 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.048992 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.052761 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.074230 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-startup\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fcqq\" (UniqueName: \"kubernetes.io/projected/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-kube-api-access-4fcqq\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111903 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-conf\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.111948 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics-certs\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.112061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.112107 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-sockets\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.112141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-reloader\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.144981 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-6lq6d"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.147594 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.152423 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.156114 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.156296 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.156919 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-96452" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.169107 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-8f79k"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.170228 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.174645 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.202176 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8f79k"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics-certs\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217150 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ebe169-8b20-4d94-99b7-96afffcb5118-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217169 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-sockets\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-reloader\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217441 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpstv\" (UniqueName: 
\"kubernetes.io/projected/f6ebe169-8b20-4d94-99b7-96afffcb5118-kube-api-access-tpstv\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217500 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-startup\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217571 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fcqq\" (UniqueName: \"kubernetes.io/projected/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-kube-api-access-4fcqq\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217610 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-conf\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217623 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-sockets\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.217810 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-reloader\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.218066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-conf\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.218288 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-frr-startup\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.241991 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-metrics-certs\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.277668 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4fcqq\" (UniqueName: \"kubernetes.io/projected/66ed71ac-c9a1-4130-bb76-eb5fc111f72a-kube-api-access-4fcqq\") pod \"frr-k8s-qlsnv\" (UID: \"66ed71ac-c9a1-4130-bb76-eb5fc111f72a\") " pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpstv\" (UniqueName: \"kubernetes.io/projected/f6ebe169-8b20-4d94-99b7-96afffcb5118-kube-api-access-tpstv\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322347 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-cert\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nltf\" (UniqueName: \"kubernetes.io/projected/5fd77adb-e801-4d3f-ac61-64615952aebd-kube-api-access-7nltf\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322442 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b5094b56-07e5-45db-8a13-ce7b931b861e-metallb-excludel2\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ebe169-8b20-4d94-99b7-96afffcb5118-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322546 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q24vv\" (UniqueName: \"kubernetes.io/projected/b5094b56-07e5-45db-8a13-ce7b931b861e-kube-api-access-q24vv\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.322608 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.336800 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.337037 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f6ebe169-8b20-4d94-99b7-96afffcb5118-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.357979 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpstv\" (UniqueName: \"kubernetes.io/projected/f6ebe169-8b20-4d94-99b7-96afffcb5118-kube-api-access-tpstv\") pod \"frr-k8s-webhook-server-7df86c4f6c-szgpw\" (UID: \"f6ebe169-8b20-4d94-99b7-96afffcb5118\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.366608 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.423778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.423924 4985 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.423949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.423991 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs podName:5fd77adb-e801-4d3f-ac61-64615952aebd nodeName:}" failed. No retries permitted until 2026-01-28 18:31:39.923966354 +0000 UTC m=+1110.750529175 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs") pod "controller-6968d8fdc4-8f79k" (UID: "5fd77adb-e801-4d3f-ac61-64615952aebd") : secret "controller-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424013 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-cert\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.424023 4985 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424042 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nltf\" (UniqueName: \"kubernetes.io/projected/5fd77adb-e801-4d3f-ac61-64615952aebd-kube-api-access-7nltf\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.424054 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist podName:b5094b56-07e5-45db-8a13-ce7b931b861e nodeName:}" failed. No retries permitted until 2026-01-28 18:31:39.924043637 +0000 UTC m=+1110.750606458 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist") pod "speaker-6lq6d" (UID: "b5094b56-07e5-45db-8a13-ce7b931b861e") : secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b5094b56-07e5-45db-8a13-ce7b931b861e-metallb-excludel2\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.424166 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q24vv\" (UniqueName: \"kubernetes.io/projected/b5094b56-07e5-45db-8a13-ce7b931b861e-kube-api-access-q24vv\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.425235 4985 secret.go:188] Couldn't get secret metallb-system/speaker-certs-secret: secret "speaker-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.425324 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs podName:b5094b56-07e5-45db-8a13-ce7b931b861e nodeName:}" failed. No retries permitted until 2026-01-28 18:31:39.925304692 +0000 UTC m=+1110.751867593 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs") pod "speaker-6lq6d" (UID: "b5094b56-07e5-45db-8a13-ce7b931b861e") : secret "speaker-certs-secret" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.425713 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/b5094b56-07e5-45db-8a13-ce7b931b861e-metallb-excludel2\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.428083 4985 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.444066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-cert\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.459723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q24vv\" (UniqueName: \"kubernetes.io/projected/b5094b56-07e5-45db-8a13-ce7b931b861e-kube-api-access-q24vv\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.459934 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nltf\" (UniqueName: \"kubernetes.io/projected/5fd77adb-e801-4d3f-ac61-64615952aebd-kube-api-access-7nltf\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.898081 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw"] Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.933080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.933226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.933425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.934275 4985 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: E0128 18:31:39.934342 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist podName:b5094b56-07e5-45db-8a13-ce7b931b861e nodeName:}" failed. No retries permitted until 2026-01-28 18:31:40.934323783 +0000 UTC m=+1111.760886604 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist") pod "speaker-6lq6d" (UID: "b5094b56-07e5-45db-8a13-ce7b931b861e") : secret "metallb-memberlist" not found Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.939692 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-metrics-certs\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:39 crc kubenswrapper[4985]: I0128 18:31:39.940346 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5fd77adb-e801-4d3f-ac61-64615952aebd-metrics-certs\") pod \"controller-6968d8fdc4-8f79k\" (UID: \"5fd77adb-e801-4d3f-ac61-64615952aebd\") " pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.084094 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.128357 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"51af1179afefa1598a904c0a9643050740148bf78a9275f20c8b2a7c055d4143"} Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.129201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerStarted","Data":"f3a7bcc0197afba71a468de099c230d22868b0f1a3690964e343bed3697cbe7d"} Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.512029 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-8f79k"] Jan 28 18:31:40 crc kubenswrapper[4985]: W0128 18:31:40.513632 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fd77adb_e801_4d3f_ac61_64615952aebd.slice/crio-153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0 WatchSource:0}: Error finding container 153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0: Status 404 returned error can't find the container with id 153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0 Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.950928 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.960741 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/b5094b56-07e5-45db-8a13-ce7b931b861e-memberlist\") pod \"speaker-6lq6d\" (UID: \"b5094b56-07e5-45db-8a13-ce7b931b861e\") " pod="metallb-system/speaker-6lq6d" Jan 28 18:31:40 crc kubenswrapper[4985]: I0128 18:31:40.962027 4985 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-6lq6d" Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.138376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"7aae29377de0d10e0129a0002e20c108028714bab9d7458c2227f36aa71a23c1"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"1dde45509cf56844f3ab6d5fbf53d0755eaead1bd66d1b74829a2f7bc7ba0d5a"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141087 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"153b4702ddecb2c3c1ad63a137fc9376f7b6fd7aa8b70d51ea947711bcd2e1b0"} Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.141201 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:41 crc kubenswrapper[4985]: I0128 18:31:41.167130 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-8f79k" podStartSLOduration=2.167102738 podStartE2EDuration="2.167102738s" podCreationTimestamp="2026-01-28 18:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:31:41.155770008 +0000 UTC m=+1111.982332839" watchObservedRunningTime="2026-01-28 18:31:41.167102738 +0000 UTC m=+1111.993665559" Jan 28 18:31:42 crc kubenswrapper[4985]: I0128 18:31:42.158165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"aec67e329e28eb0bf89791a99394df8f02835ef73cc898402236bd17e3427a2f"} Jan 28 18:31:42 crc kubenswrapper[4985]: I0128 18:31:42.158512 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089"} Jan 28 18:31:42 crc kubenswrapper[4985]: I0128 18:31:42.186572 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-6lq6d" podStartSLOduration=3.186540138 podStartE2EDuration="3.186540138s" podCreationTimestamp="2026-01-28 18:31:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:31:42.182539615 +0000 UTC m=+1113.009102446" watchObservedRunningTime="2026-01-28 18:31:42.186540138 +0000 UTC m=+1113.013102969" Jan 28 18:31:43 crc kubenswrapper[4985]: I0128 18:31:43.165980 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lq6d" Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.209780 4985 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerStarted","Data":"35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5"} Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.210537 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.212288 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="a3f390e836420052d8007a8696e14828047253fc5efd7c67ffbe37e8a32cf87f" exitCode=0 Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.212403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"a3f390e836420052d8007a8696e14828047253fc5efd7c67ffbe37e8a32cf87f"} Jan 28 18:31:49 crc kubenswrapper[4985]: I0128 18:31:49.228821 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podStartSLOduration=1.930057504 podStartE2EDuration="10.228798867s" podCreationTimestamp="2026-01-28 18:31:39 +0000 UTC" firstStartedPulling="2026-01-28 18:31:39.91083077 +0000 UTC m=+1110.737393591" lastFinishedPulling="2026-01-28 18:31:48.209572143 +0000 UTC m=+1119.036134954" observedRunningTime="2026-01-28 18:31:49.226085491 +0000 UTC m=+1120.052648392" watchObservedRunningTime="2026-01-28 18:31:49.228798867 +0000 UTC m=+1120.055361688" Jan 28 18:31:50 crc kubenswrapper[4985]: I0128 18:31:50.087522 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 18:31:50 crc kubenswrapper[4985]: I0128 18:31:50.220202 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="7b59bc8d188cb60f10839500f4d239e4f82028acc01ea79094bf48b16d196d3f" exitCode=0 Jan 28 18:31:50 crc kubenswrapper[4985]: I0128 18:31:50.220322 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"7b59bc8d188cb60f10839500f4d239e4f82028acc01ea79094bf48b16d196d3f"} Jan 28 18:31:51 crc kubenswrapper[4985]: I0128 18:31:51.229357 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="e26017e0e9bd57074a816c7ac382b620fe7b45a2283cf81b3b79d29fe6ceec1e" exitCode=0 Jan 28 18:31:51 crc kubenswrapper[4985]: I0128 18:31:51.229452 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"e26017e0e9bd57074a816c7ac382b620fe7b45a2283cf81b3b79d29fe6ceec1e"} Jan 28 18:31:52 crc kubenswrapper[4985]: I0128 18:31:52.239894 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"bae530c428949b3d5d3547f623b72611b427961e6e638679792d2edab1b5d06f"} Jan 28 18:31:52 crc kubenswrapper[4985]: I0128 18:31:52.240193 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8"} Jan 28 18:31:52 crc 
kubenswrapper[4985]: I0128 18:31:52.240205 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1"} Jan 28 18:31:53 crc kubenswrapper[4985]: I0128 18:31:53.254130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"c9e858ad5d739a82ca8eb06dac2dc8e8d78e9ba2aed560b5b10f7c3c6331d2d3"} Jan 28 18:31:53 crc kubenswrapper[4985]: I0128 18:31:53.254455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"5dd1e59090599b9440555f63a8837cb32977721ba8696f470d0c913549edfbc7"} Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.264455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"a0c445090f577133e74cd752367f1ce2754e4f088f7a54104278f9da1e09484f"} Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.264837 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.287847 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-qlsnv" podStartSLOduration=7.684439962 podStartE2EDuration="16.287830276s" podCreationTimestamp="2026-01-28 18:31:38 +0000 UTC" firstStartedPulling="2026-01-28 18:31:39.629449896 +0000 UTC m=+1110.456012717" lastFinishedPulling="2026-01-28 18:31:48.23284021 +0000 UTC m=+1119.059403031" observedRunningTime="2026-01-28 18:31:54.283319428 +0000 UTC m=+1125.109882249" watchObservedRunningTime="2026-01-28 18:31:54.287830276 +0000 UTC m=+1125.114393097" Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.338028 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:54 crc kubenswrapper[4985]: I0128 18:31:54.374640 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:31:59 crc kubenswrapper[4985]: I0128 18:31:59.371813 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 18:32:00 crc kubenswrapper[4985]: I0128 18:32:00.966863 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6lq6d" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.810402 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-847cx"]
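In the pod_startup_latency_tracker entries above, podStartSLOduration works out to podStartE2EDuration minus the image-pull window (lastFinishedPulling minus firstStartedPulling, taken on the monotonic m= offsets). A quick check with the numbers copied from the frr-k8s-webhook-server and frr-k8s-qlsnv entries:

```python
# Monotonic m=+... offsets and durations copied from the entries above.
pods = {
    "frr-k8s-webhook-server-7df86c4f6c-szgpw": {
        "e2e": 10.228798867,
        "first_started_pulling": 1110.737393591,
        "last_finished_pulling": 1119.036134954,
        "slo_reported": 1.930057504,
    },
    "frr-k8s-qlsnv": {
        "e2e": 16.287830276,
        "first_started_pulling": 1110.456012717,
        "last_finished_pulling": 1119.059403031,
        "slo_reported": 7.684439962,
    },
}

for name, t in pods.items():
    pull = t["last_finished_pulling"] - t["first_started_pulling"]
    slo = t["e2e"] - pull
    # Both pods reproduce the logged podStartSLOduration: pull time is excluded.
    assert abs(slo - t["slo_reported"]) < 1e-9, name
    print(f"{name}: pull={pull:.9f}s slo={slo:.9f}s")
```

Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.811964 4985 util.go:30] "No sandbox for pod can be found.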
Need to start a new one" pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.856523 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.856705 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.856858 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-l44jq" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.857406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"openstack-operator-index-847cx\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.865825 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.958985 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"openstack-operator-index-847cx\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:03 crc kubenswrapper[4985]: I0128 18:32:03.978050 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"openstack-operator-index-847cx\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:04 crc kubenswrapper[4985]: I0128 18:32:04.176161 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:04 crc kubenswrapper[4985]: I0128 18:32:04.620953 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:04 crc kubenswrapper[4985]: W0128 18:32:04.625039 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c991bfb_875d_4aa7_b36f_08a198a36da9.slice/crio-6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6 WatchSource:0}: Error finding container 6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6: Status 404 returned error can't find the container with id 6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6 Jan 28 18:32:05 crc kubenswrapper[4985]: I0128 18:32:05.384940 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerStarted","Data":"6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6"} Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.189122 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.806571 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-wnjfp"] Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.808095 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.830454 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wnjfp"] Jan 28 18:32:07 crc kubenswrapper[4985]: I0128 18:32:07.923532 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4mhj\" (UniqueName: \"kubernetes.io/projected/3314cb32-9bb8-46fd-b28e-5a6e9b779fa7-kube-api-access-v4mhj\") pod \"openstack-operator-index-wnjfp\" (UID: \"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7\") " pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:08 crc kubenswrapper[4985]: I0128 18:32:08.025112 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4mhj\" (UniqueName: \"kubernetes.io/projected/3314cb32-9bb8-46fd-b28e-5a6e9b779fa7-kube-api-access-v4mhj\") pod \"openstack-operator-index-wnjfp\" (UID: \"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7\") " pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:08 crc kubenswrapper[4985]: I0128 18:32:08.047229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4mhj\" (UniqueName: \"kubernetes.io/projected/3314cb32-9bb8-46fd-b28e-5a6e9b779fa7-kube-api-access-v4mhj\") pod \"openstack-operator-index-wnjfp\" (UID: \"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7\") " pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:08 crc kubenswrapper[4985]: I0128 18:32:08.136314 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:09 crc kubenswrapper[4985]: I0128 18:32:09.217365 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-wnjfp"] Jan 28 18:32:09 crc kubenswrapper[4985]: I0128 18:32:09.375055 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qlsnv" Jan 28 18:32:11 crc kubenswrapper[4985]: I0128 18:32:11.186508 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:32:11 crc kubenswrapper[4985]: I0128 18:32:11.187092 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:32:11 crc kubenswrapper[4985]: I0128 18:32:11.443680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerStarted","Data":"fc84769779f63e0226ec33479e7f491d14108554ee38913895f8cd0bd86864d3"} Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.474388 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerStarted","Data":"a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a"} Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.476430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerStarted","Data":"58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef"} Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.476648 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-847cx" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server" containerID="cri-o://58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef" gracePeriod=2 Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.502700 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-wnjfp" podStartSLOduration=4.959035482 podStartE2EDuration="7.502451459s" podCreationTimestamp="2026-01-28 18:32:07 +0000 UTC" firstStartedPulling="2026-01-28 18:32:11.040688486 +0000 UTC m=+1141.867251317" lastFinishedPulling="2026-01-28 18:32:13.584104453 +0000 UTC m=+1144.410667294" observedRunningTime="2026-01-28 18:32:14.49538576 +0000 UTC m=+1145.321948631" watchObservedRunningTime="2026-01-28 18:32:14.502451459 +0000 UTC m=+1145.329014280" Jan 28 18:32:14 crc kubenswrapper[4985]: I0128 18:32:14.519304 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-847cx" podStartSLOduration=2.566795185 podStartE2EDuration="11.519275024s" podCreationTimestamp="2026-01-28 18:32:03 +0000 UTC" firstStartedPulling="2026-01-28 18:32:04.627563899 +0000 UTC 
m=+1135.454126720" lastFinishedPulling="2026-01-28 18:32:13.580043728 +0000 UTC m=+1144.406606559" observedRunningTime="2026-01-28 18:32:14.516726752 +0000 UTC m=+1145.343289583" watchObservedRunningTime="2026-01-28 18:32:14.519275024 +0000 UTC m=+1145.345837885" Jan 28 18:32:15 crc kubenswrapper[4985]: I0128 18:32:15.487363 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerID="58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef" exitCode=0 Jan 28 18:32:15 crc kubenswrapper[4985]: I0128 18:32:15.487427 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerDied","Data":"58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef"} Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.122042 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.201132 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") pod \"0c991bfb-875d-4aa7-b36f-08a198a36da9\" (UID: \"0c991bfb-875d-4aa7-b36f-08a198a36da9\") " Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.206316 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w" (OuterVolumeSpecName: "kube-api-access-dmp8w") pod "0c991bfb-875d-4aa7-b36f-08a198a36da9" (UID: "0c991bfb-875d-4aa7-b36f-08a198a36da9"). InnerVolumeSpecName "kube-api-access-dmp8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.303292 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmp8w\" (UniqueName: \"kubernetes.io/projected/0c991bfb-875d-4aa7-b36f-08a198a36da9-kube-api-access-dmp8w\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.495933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-847cx" event={"ID":"0c991bfb-875d-4aa7-b36f-08a198a36da9","Type":"ContainerDied","Data":"6e2d00abd3058f3b2d0c276fcb7fb3da696a17ae2a6662ee220589f2fffe64b6"} Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.495976 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-847cx" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.495996 4985 scope.go:117] "RemoveContainer" containerID="58f1f3f27d11b00a29a093ee8413d7694f67531cb4a7e3d77e5a61693b957cef" Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.536516 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:16 crc kubenswrapper[4985]: I0128 18:32:16.542429 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-847cx"] Jan 28 18:32:17 crc kubenswrapper[4985]: I0128 18:32:17.275995 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" path="/var/lib/kubelet/pods/0c991bfb-875d-4aa7-b36f-08a198a36da9/volumes" Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.137303 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.138080 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.173809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:18 crc kubenswrapper[4985]: I0128 18:32:18.547538 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.237653 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"] Jan 28 18:32:25 crc kubenswrapper[4985]: E0128 18:32:25.238482 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.238495 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.238645 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c991bfb-875d-4aa7-b36f-08a198a36da9" containerName="registry-server" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.239689 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.246370 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-w5lcz" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.253104 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"] Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.272416 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.272546 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.272600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.373860 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.373946 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.374173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.374959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.375301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.397419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:25 crc kubenswrapper[4985]: I0128 18:32:25.574660 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:26 crc kubenswrapper[4985]: I0128 18:32:26.021391 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg"] Jan 28 18:32:26 crc kubenswrapper[4985]: I0128 18:32:26.600002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerStarted","Data":"759ff3ea70b0b4ae7d7d5bff2276f3f6400ffef8d0a0df4486bbe1ab81bdf4a8"} Jan 28 18:32:26 crc kubenswrapper[4985]: I0128 18:32:26.600049 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerStarted","Data":"1d3469dcbbd2221fa466fdc12e464d9ffe30dee105f24ca5c259d7e5823c660e"} Jan 28 18:32:27 crc kubenswrapper[4985]: I0128 18:32:27.611574 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerID="759ff3ea70b0b4ae7d7d5bff2276f3f6400ffef8d0a0df4486bbe1ab81bdf4a8" exitCode=0 Jan 28 18:32:27 crc kubenswrapper[4985]: I0128 18:32:27.611675 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"759ff3ea70b0b4ae7d7d5bff2276f3f6400ffef8d0a0df4486bbe1ab81bdf4a8"} Jan 28 18:32:29 crc kubenswrapper[4985]: I0128 18:32:29.635545 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerID="2a1420691545df1dbfb468561eab6f368aa72604a8fa49d7c79feb86d8bfb5cc" exitCode=0 Jan 28 18:32:29 crc kubenswrapper[4985]: I0128 18:32:29.635756 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" 
event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"2a1420691545df1dbfb468561eab6f368aa72604a8fa49d7c79feb86d8bfb5cc"} Jan 28 18:32:30 crc kubenswrapper[4985]: I0128 18:32:30.649927 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerID="82078f9a9ef7771cc51696c1cfd3e236e2109c92249b4c20bec63715dcc1d4ab" exitCode=0 Jan 28 18:32:30 crc kubenswrapper[4985]: I0128 18:32:30.650010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"82078f9a9ef7771cc51696c1cfd3e236e2109c92249b4c20bec63715dcc1d4ab"} Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.266994 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.405906 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") pod \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.406362 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") pod \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.406541 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") pod \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\" (UID: \"b5e9d40d-8ad9-4602-ac23-7cad303b1696\") " Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.407334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle" (OuterVolumeSpecName: "bundle") pod "b5e9d40d-8ad9-4602-ac23-7cad303b1696" (UID: "b5e9d40d-8ad9-4602-ac23-7cad303b1696"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.415813 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v" (OuterVolumeSpecName: "kube-api-access-gw25v") pod "b5e9d40d-8ad9-4602-ac23-7cad303b1696" (UID: "b5e9d40d-8ad9-4602-ac23-7cad303b1696"). InnerVolumeSpecName "kube-api-access-gw25v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.429476 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util" (OuterVolumeSpecName: "util") pod "b5e9d40d-8ad9-4602-ac23-7cad303b1696" (UID: "b5e9d40d-8ad9-4602-ac23-7cad303b1696"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.515662 4985 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-util\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.515713 4985 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/b5e9d40d-8ad9-4602-ac23-7cad303b1696-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.515736 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw25v\" (UniqueName: \"kubernetes.io/projected/b5e9d40d-8ad9-4602-ac23-7cad303b1696-kube-api-access-gw25v\") on node \"crc\" DevicePath \"\"" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.671309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" event={"ID":"b5e9d40d-8ad9-4602-ac23-7cad303b1696","Type":"ContainerDied","Data":"1d3469dcbbd2221fa466fdc12e464d9ffe30dee105f24ca5c259d7e5823c660e"} Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.671341 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1d3469dcbbd2221fa466fdc12e464d9ffe30dee105f24ca5c259d7e5823c660e" Jan 28 18:32:32 crc kubenswrapper[4985]: I0128 18:32:32.671433 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.826547 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"] Jan 28 18:32:37 crc kubenswrapper[4985]: E0128 18:32:37.827567 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="extract" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827581 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="extract" Jan 28 18:32:37 crc kubenswrapper[4985]: E0128 18:32:37.827602 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="pull" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827608 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="pull" Jan 28 18:32:37 crc kubenswrapper[4985]: E0128 18:32:37.827622 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="util" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827628 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="util" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.827756 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e9d40d-8ad9-4602-ac23-7cad303b1696" containerName="extract" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.828263 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.830850 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-flwrr" Jan 28 18:32:37 crc kubenswrapper[4985]: I0128 18:32:37.872183 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"] Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.002591 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwbt4\" (UniqueName: \"kubernetes.io/projected/82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62-kube-api-access-lwbt4\") pod \"openstack-operator-controller-init-687c66fd56-xdvhx\" (UID: \"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62\") " pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.104066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwbt4\" (UniqueName: \"kubernetes.io/projected/82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62-kube-api-access-lwbt4\") pod \"openstack-operator-controller-init-687c66fd56-xdvhx\" (UID: \"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62\") " pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.133865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwbt4\" (UniqueName: \"kubernetes.io/projected/82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62-kube-api-access-lwbt4\") pod \"openstack-operator-controller-init-687c66fd56-xdvhx\" (UID: \"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62\") " pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.150742 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.684221 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx"] Jan 28 18:32:38 crc kubenswrapper[4985]: I0128 18:32:38.722805 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerStarted","Data":"935b66526b9ec7e30d57989d97030486c3e4a2cdc4b4fecdf7789e423a532d09"} Jan 28 18:32:41 crc kubenswrapper[4985]: I0128 18:32:41.187831 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:32:41 crc kubenswrapper[4985]: I0128 18:32:41.188141 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:32:46 crc kubenswrapper[4985]: I0128 18:32:46.845756 4985 scope.go:117] "RemoveContainer" containerID="0d1f250737c643fbc85140566ed81835e3f4db2d92ec1ed36f15c0c9eb2c030a" Jan 28 18:32:51 crc kubenswrapper[4985]: I0128 18:32:51.835342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerStarted","Data":"8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e"} Jan 28 18:32:51 crc kubenswrapper[4985]: I0128 18:32:51.836205 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 18:32:51 crc kubenswrapper[4985]: I0128 18:32:51.901862 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podStartSLOduration=2.407590585 podStartE2EDuration="14.901837167s" podCreationTimestamp="2026-01-28 18:32:37 +0000 UTC" firstStartedPulling="2026-01-28 18:32:38.691811297 +0000 UTC m=+1169.518374118" lastFinishedPulling="2026-01-28 18:32:51.186057839 +0000 UTC m=+1182.012620700" observedRunningTime="2026-01-28 18:32:51.885034813 +0000 UTC m=+1182.711597664" watchObservedRunningTime="2026-01-28 18:32:51.901837167 +0000 UTC m=+1182.728399998" Jan 28 18:32:58 crc kubenswrapper[4985]: I0128 18:32:58.154345 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.185797 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.188392 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.188553 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.189569 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.189810 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093" gracePeriod=600 Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.991667 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093" exitCode=0 Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.991749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093"} Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.992013 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a"} Jan 28 18:33:11 crc kubenswrapper[4985]: I0128 18:33:11.992052 4985 scope.go:117] "RemoveContainer" containerID="040e45270fd174720803f9ffa3b825437d4522dc625dae36be2468e03f889dab" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.552687 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.554599 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.556531 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-hnhrg" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.580108 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.581569 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.585090 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-ndlm5" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.597158 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.605424 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6dk7\" (UniqueName: \"kubernetes.io/projected/4fa1b302-aad3-4e6e-9cd2-bba65262c1e8-kube-api-access-g6dk7\") pod \"barbican-operator-controller-manager-7f86f8796f-ww4nj\" (UID: \"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.608830 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.610200 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.614545 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-8j87r" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.628127 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.629099 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.632802 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ndrvf" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.639563 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.640836 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.650845 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.652425 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-cmgj7" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.667023 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.686325 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712354 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2z62\" (UniqueName: \"kubernetes.io/projected/4dfb4621-d061-4224-8aee-840726565aa3-kube-api-access-b2z62\") pod \"designate-operator-controller-manager-b45d7bf98-75d84\" (UID: \"4dfb4621-d061-4224-8aee-840726565aa3\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712425 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2qth\" (UniqueName: \"kubernetes.io/projected/cc7f29e1-e6e0-45a0-920a-4b18d8204c65-kube-api-access-p2qth\") pod \"heat-operator-controller-manager-594c8c9d5d-fm7nr\" (UID: \"cc7f29e1-e6e0-45a0-920a-4b18d8204c65\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712520 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkghb\" (UniqueName: \"kubernetes.io/projected/99893bb5-33ef-4159-bf8f-1c79a58e74d9-kube-api-access-xkghb\") pod \"glance-operator-controller-manager-78fdd796fd-6bdmh\" (UID: \"99893bb5-33ef-4159-bf8f-1c79a58e74d9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712565 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmkzq\" (UniqueName: \"kubernetes.io/projected/7ef21481-ade5-436a-ae3a-f284a7e438d3-kube-api-access-dmkzq\") pod \"cinder-operator-controller-manager-7478f7dbf9-7gfrh\" (UID: \"7ef21481-ade5-436a-ae3a-f284a7e438d3\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.712621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6dk7\" (UniqueName: \"kubernetes.io/projected/4fa1b302-aad3-4e6e-9cd2-bba65262c1e8-kube-api-access-g6dk7\") pod \"barbican-operator-controller-manager-7f86f8796f-ww4nj\" (UID: \"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.742930 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.744967 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6dk7\" (UniqueName: \"kubernetes.io/projected/4fa1b302-aad3-4e6e-9cd2-bba65262c1e8-kube-api-access-g6dk7\") pod \"barbican-operator-controller-manager-7f86f8796f-ww4nj\" (UID: \"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8\") " pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.755879 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.758645 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.764957 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-pfg5x" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.789319 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.791308 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.804047 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.804240 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-j2s8q" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.819988 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xkghb\" (UniqueName: \"kubernetes.io/projected/99893bb5-33ef-4159-bf8f-1c79a58e74d9-kube-api-access-xkghb\") pod \"glance-operator-controller-manager-78fdd796fd-6bdmh\" (UID: \"99893bb5-33ef-4159-bf8f-1c79a58e74d9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820033 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmkzq\" (UniqueName: \"kubernetes.io/projected/7ef21481-ade5-436a-ae3a-f284a7e438d3-kube-api-access-dmkzq\") pod \"cinder-operator-controller-manager-7478f7dbf9-7gfrh\" (UID: \"7ef21481-ade5-436a-ae3a-f284a7e438d3\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820092 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6n72\" (UniqueName: \"kubernetes.io/projected/697da6ae-2950-468c-82e9-bcb1a1af61e7-kube-api-access-b6n72\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: 
\"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820169 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqdn\" (UniqueName: \"kubernetes.io/projected/99b88683-3e0a-4afa-91ab-71feac27fba1-kube-api-access-tdqdn\") pod \"horizon-operator-controller-manager-77d5c5b54f-6skp6\" (UID: \"99b88683-3e0a-4afa-91ab-71feac27fba1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820203 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2z62\" (UniqueName: \"kubernetes.io/projected/4dfb4621-d061-4224-8aee-840726565aa3-kube-api-access-b2z62\") pod \"designate-operator-controller-manager-b45d7bf98-75d84\" (UID: \"4dfb4621-d061-4224-8aee-840726565aa3\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.820227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2qth\" (UniqueName: \"kubernetes.io/projected/cc7f29e1-e6e0-45a0-920a-4b18d8204c65-kube-api-access-p2qth\") pod \"heat-operator-controller-manager-594c8c9d5d-fm7nr\" (UID: \"cc7f29e1-e6e0-45a0-920a-4b18d8204c65\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.856717 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xkghb\" (UniqueName: \"kubernetes.io/projected/99893bb5-33ef-4159-bf8f-1c79a58e74d9-kube-api-access-xkghb\") pod \"glance-operator-controller-manager-78fdd796fd-6bdmh\" (UID: \"99893bb5-33ef-4159-bf8f-1c79a58e74d9\") " pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.860996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2qth\" (UniqueName: \"kubernetes.io/projected/cc7f29e1-e6e0-45a0-920a-4b18d8204c65-kube-api-access-p2qth\") pod \"heat-operator-controller-manager-594c8c9d5d-fm7nr\" (UID: \"cc7f29e1-e6e0-45a0-920a-4b18d8204c65\") " pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.861087 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.862374 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.872317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-k2q85" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.872698 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2z62\" (UniqueName: \"kubernetes.io/projected/4dfb4621-d061-4224-8aee-840726565aa3-kube-api-access-b2z62\") pod \"designate-operator-controller-manager-b45d7bf98-75d84\" (UID: \"4dfb4621-d061-4224-8aee-840726565aa3\") " pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.872869 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmkzq\" (UniqueName: \"kubernetes.io/projected/7ef21481-ade5-436a-ae3a-f284a7e438d3-kube-api-access-dmkzq\") pod \"cinder-operator-controller-manager-7478f7dbf9-7gfrh\" (UID: \"7ef21481-ade5-436a-ae3a-f284a7e438d3\") " pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.879983 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.884316 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.906165 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922200 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922522 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b6n72\" (UniqueName: \"kubernetes.io/projected/697da6ae-2950-468c-82e9-bcb1a1af61e7-kube-api-access-b6n72\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdqdn\" (UniqueName: \"kubernetes.io/projected/99b88683-3e0a-4afa-91ab-71feac27fba1-kube-api-access-tdqdn\") pod \"horizon-operator-controller-manager-77d5c5b54f-6skp6\" (UID: \"99b88683-3e0a-4afa-91ab-71feac27fba1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.922830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjt6k\" (UniqueName: \"kubernetes.io/projected/75e682e9-e5a5-47f1-83cc-c8004ebe224a-kube-api-access-zjt6k\") pod \"ironic-operator-controller-manager-598f7747c9-s2n6z\" (UID: 
\"75e682e9-e5a5-47f1-83cc-c8004ebe224a\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" Jan 28 18:33:25 crc kubenswrapper[4985]: E0128 18:33:25.923130 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:25 crc kubenswrapper[4985]: E0128 18:33:25.923310 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:26.42328407 +0000 UTC m=+1217.249846901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.929557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.930332 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.950333 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.950969 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.953466 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdqdn\" (UniqueName: \"kubernetes.io/projected/99b88683-3e0a-4afa-91ab-71feac27fba1-kube-api-access-tdqdn\") pod \"horizon-operator-controller-manager-77d5c5b54f-6skp6\" (UID: \"99b88683-3e0a-4afa-91ab-71feac27fba1\") " pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.965494 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.971918 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6n72\" (UniqueName: \"kubernetes.io/projected/697da6ae-2950-468c-82e9-bcb1a1af61e7-kube-api-access-b6n72\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.984314 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"] Jan 28 18:33:25 crc kubenswrapper[4985]: I0128 18:33:25.985357 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.000823 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-gmkq2"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.001445 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.020320 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.021408 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.025612 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rkfcv"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.026773 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv6lq\" (UniqueName: \"kubernetes.io/projected/b5a0c28d-1434-40f0-8759-d76b65dc2c30-kube-api-access-fv6lq\") pod \"keystone-operator-controller-manager-b8b6d4659-hktv5\" (UID: \"b5a0c28d-1434-40f0-8759-d76b65dc2c30\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.027190 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zjt6k\" (UniqueName: \"kubernetes.io/projected/75e682e9-e5a5-47f1-83cc-c8004ebe224a-kube-api-access-zjt6k\") pod \"ironic-operator-controller-manager-598f7747c9-s2n6z\" (UID: \"75e682e9-e5a5-47f1-83cc-c8004ebe224a\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.077320 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.085070 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zjt6k\" (UniqueName: \"kubernetes.io/projected/75e682e9-e5a5-47f1-83cc-c8004ebe224a-kube-api-access-zjt6k\") pod \"ironic-operator-controller-manager-598f7747c9-s2n6z\" (UID: \"75e682e9-e5a5-47f1-83cc-c8004ebe224a\") " pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.103100 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.111965 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.116334 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-4hcfd" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.129758 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mf2c\" (UniqueName: \"kubernetes.io/projected/654a2c56-81a7-4b32-ad1d-c4d60b054b47-kube-api-access-7mf2c\") pod \"manila-operator-controller-manager-78c6999f6f-9lm5f\" (UID: \"654a2c56-81a7-4b32-ad1d-c4d60b054b47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.129897 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv6lq\" (UniqueName: \"kubernetes.io/projected/b5a0c28d-1434-40f0-8759-d76b65dc2c30-kube-api-access-fv6lq\") pod \"keystone-operator-controller-manager-b8b6d4659-hktv5\" (UID: \"b5a0c28d-1434-40f0-8759-d76b65dc2c30\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.134639 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.167326 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.191279 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv6lq\" (UniqueName: \"kubernetes.io/projected/b5a0c28d-1434-40f0-8759-d76b65dc2c30-kube-api-access-fv6lq\") pod \"keystone-operator-controller-manager-b8b6d4659-hktv5\" (UID: \"b5a0c28d-1434-40f0-8759-d76b65dc2c30\") " pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.216081 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.217973 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.223864 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-n9xjt"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.231701 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mf2c\" (UniqueName: \"kubernetes.io/projected/654a2c56-81a7-4b32-ad1d-c4d60b054b47-kube-api-access-7mf2c\") pod \"manila-operator-controller-manager-78c6999f6f-9lm5f\" (UID: \"654a2c56-81a7-4b32-ad1d-c4d60b054b47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.231781 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f2vn\" (UniqueName: \"kubernetes.io/projected/9897766d-6497-4d0e-bd9a-ef8e31a08e24-kube-api-access-2f2vn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbn84\" (UID: \"9897766d-6497-4d0e-bd9a-ef8e31a08e24\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.269394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mf2c\" (UniqueName: \"kubernetes.io/projected/654a2c56-81a7-4b32-ad1d-c4d60b054b47-kube-api-access-7mf2c\") pod \"manila-operator-controller-manager-78c6999f6f-9lm5f\" (UID: \"654a2c56-81a7-4b32-ad1d-c4d60b054b47\") " pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.292891 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.294117 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.296640 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-dbsgd"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.317879 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.344179 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"
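Each kube-api-access-* volume above goes through the same three-step trace: VerifyControllerAttachedVolume (reconciler_common.go:245), MountVolume started (reconciler_common.go:218), then MountVolume.SetUp succeeded (operation_generator.go:637). That is the volume manager's reconciler diffing the desired state of the world (the volumes the scheduled pods need) against the actual state (what is already mounted) and kicking off one operation per missing mount. A simplified single-pass sketch of the pattern; the two maps are stand-ins, an assumption for illustration, since the real reconciler loops continuously and runs each operation asynchronously:

// Simplified single pass of the desired-state/actual-state reconciliation
// that the reconciler_common.go lines trace. The maps stand in for
// kubelet's desired/actual state caches (an assumption for illustration).
package main

import "fmt"

type volumeKey struct{ pod, volume string }

func reconcile(desired, actual map[volumeKey]bool) {
	for k := range desired {
		if actual[k] {
			continue // already mounted, nothing to do
		}
		// Step 1: check the volume is attached. For projected service
		// account tokens and secrets this is trivially true, since they
		// are node-local and never controller-attached.
		fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod %q\n", k.volume, k.pod)
		// Step 2: mount. On success the executor records the mount in
		// the actual state, so the next pass skips this volume.
		fmt.Printf("operationExecutor.MountVolume started for volume %q pod %q\n", k.volume, k.pod)
		actual[k] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod %q\n", k.volume, k.pod)
	}
}

func main() {
	desired := map[volumeKey]bool{
		{"ironic-operator-controller-manager-598f7747c9-s2n6z", "kube-api-access-zjt6k"}: true,
	}
	reconcile(desired, map[volumeKey]bool{})
}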
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.345311 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zlwq\" (UniqueName: \"kubernetes.io/projected/367b6525-0367-437a-9fe3-b2007411f4af-kube-api-access-5zlwq\") pod \"octavia-operator-controller-manager-5f4cd88d46-4smn2\" (UID: \"367b6525-0367-437a-9fe3-b2007411f4af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.345374 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6pmh\" (UniqueName: \"kubernetes.io/projected/873dc5cd-5c8e-417e-b99a-a52dfcfd701b-kube-api-access-m6pmh\") pod \"neutron-operator-controller-manager-78d58447c5-dlssr\" (UID: \"873dc5cd-5c8e-417e-b99a-a52dfcfd701b\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.345462 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2f2vn\" (UniqueName: \"kubernetes.io/projected/9897766d-6497-4d0e-bd9a-ef8e31a08e24-kube-api-access-2f2vn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbn84\" (UID: \"9897766d-6497-4d0e-bd9a-ef8e31a08e24\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.349910 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.350982 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.359369 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-5c2rc"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.362161 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"]
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.363453 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.366756 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.380621 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.380890 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-zdlj6"
Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.382139 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.390660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2f2vn\" (UniqueName: \"kubernetes.io/projected/9897766d-6497-4d0e-bd9a-ef8e31a08e24-kube-api-access-2f2vn\") pod \"mariadb-operator-controller-manager-6b9fb5fdcb-rbn84\" (UID: \"9897766d-6497-4d0e-bd9a-ef8e31a08e24\") " pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctn8h\" (UniqueName: \"kubernetes.io/projected/9c7284ab-b40f-4275-b85e-77aebd660135-kube-api-access-ctn8h\") pod \"nova-operator-controller-manager-7bdb645866-7mtzf\" (UID: \"9c7284ab-b40f-4275-b85e-77aebd660135\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454794 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zlwq\" (UniqueName: \"kubernetes.io/projected/367b6525-0367-437a-9fe3-b2007411f4af-kube-api-access-5zlwq\") pod \"octavia-operator-controller-manager-5f4cd88d46-4smn2\" (UID: \"367b6525-0367-437a-9fe3-b2007411f4af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454840 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m6pmh\" (UniqueName: \"kubernetes.io/projected/873dc5cd-5c8e-417e-b99a-a52dfcfd701b-kube-api-access-m6pmh\") pod \"neutron-operator-controller-manager-78d58447c5-dlssr\" (UID: \"873dc5cd-5c8e-417e-b99a-a52dfcfd701b\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.454977 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.455173 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.455223 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.455209058 +0000 UTC m=+1218.281771879 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.467350 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.487268 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zlwq\" (UniqueName: \"kubernetes.io/projected/367b6525-0367-437a-9fe3-b2007411f4af-kube-api-access-5zlwq\") pod \"octavia-operator-controller-manager-5f4cd88d46-4smn2\" (UID: \"367b6525-0367-437a-9fe3-b2007411f4af\") " pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.491316 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6pmh\" (UniqueName: \"kubernetes.io/projected/873dc5cd-5c8e-417e-b99a-a52dfcfd701b-kube-api-access-m6pmh\") pod \"neutron-operator-controller-manager-78d58447c5-dlssr\" (UID: \"873dc5cd-5c8e-417e-b99a-a52dfcfd701b\") " pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.497401 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.509383 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.549365 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.549401 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.556547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.556663 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctn8h\" (UniqueName: \"kubernetes.io/projected/9c7284ab-b40f-4275-b85e-77aebd660135-kube-api-access-ctn8h\") pod \"nova-operator-controller-manager-7bdb645866-7mtzf\" (UID: \"9c7284ab-b40f-4275-b85e-77aebd660135\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.556691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5kzw\" (UniqueName: \"kubernetes.io/projected/70329607-4bbe-43ad-bb7a-2b62f26af473-kube-api-access-h5kzw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.575423 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.576549 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.578454 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-6fcvv" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.581771 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctn8h\" (UniqueName: \"kubernetes.io/projected/9c7284ab-b40f-4275-b85e-77aebd660135-kube-api-access-ctn8h\") pod \"nova-operator-controller-manager-7bdb645866-7mtzf\" (UID: \"9c7284ab-b40f-4275-b85e-77aebd660135\") " pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.585603 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.586866 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.589993 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-nw7jf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.600671 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.623827 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.651867 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.659087 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5kzw\" (UniqueName: \"kubernetes.io/projected/70329607-4bbe-43ad-bb7a-2b62f26af473-kube-api-access-h5kzw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.659245 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.659579 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:26 crc kubenswrapper[4985]: E0128 18:33:26.659650 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.159631399 +0000 UTC m=+1217.986194220 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.683960 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.686657 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.687600 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5kzw\" (UniqueName: \"kubernetes.io/projected/70329607-4bbe-43ad-bb7a-2b62f26af473-kube-api-access-h5kzw\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.688688 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9wkb5" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.694423 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.695886 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.698671 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-6dpzx" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.710231 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.722018 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.727787 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.745087 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.746308 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.748410 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-kfvvt" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.769621 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g84zp\" (UniqueName: \"kubernetes.io/projected/91971c24-6187-432c-84ba-65dba69b4598-kube-api-access-g84zp\") pod \"placement-operator-controller-manager-79d5ccc684-qn5x9\" (UID: \"91971c24-6187-432c-84ba-65dba69b4598\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.769671 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x57sk\" (UniqueName: \"kubernetes.io/projected/50682373-a3d7-491e-84a0-1d5613ee2e8a-kube-api-access-x57sk\") pod \"ovn-operator-controller-manager-6f75f45d54-v5mmf\" (UID: \"50682373-a3d7-491e-84a0-1d5613ee2e8a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.789714 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.821360 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xzkhh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.822576 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.824925 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-gjb5r" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.831921 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xzkhh"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.854787 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.856004 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.859744 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.859955 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.860111 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-4bpcw" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.864890 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.884681 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7prf\" (UniqueName: \"kubernetes.io/projected/1310770f-7cb7-4874-b2a0-4ef733911716-kube-api-access-s7prf\") pod \"test-operator-controller-manager-69797bbcbd-xwzkh\" (UID: \"1310770f-7cb7-4874-b2a0-4ef733911716\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.884763 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7w4p\" (UniqueName: \"kubernetes.io/projected/359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3-kube-api-access-m7w4p\") pod \"telemetry-operator-controller-manager-74c974475f-b9j67\" (UID: \"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3\") " pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.884949 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5d8sp\" (UniqueName: \"kubernetes.io/projected/c95374e8-7d41-4a49-add9-7f28196d70eb-kube-api-access-5d8sp\") pod \"swift-operator-controller-manager-547cbdb99f-9kbdr\" (UID: \"c95374e8-7d41-4a49-add9-7f28196d70eb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.885026 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g84zp\" (UniqueName: \"kubernetes.io/projected/91971c24-6187-432c-84ba-65dba69b4598-kube-api-access-g84zp\") pod \"placement-operator-controller-manager-79d5ccc684-qn5x9\" (UID: \"91971c24-6187-432c-84ba-65dba69b4598\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.885352 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.886662 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.889554 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-r5w54" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.905628 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2"] Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.911513 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x57sk\" (UniqueName: \"kubernetes.io/projected/50682373-a3d7-491e-84a0-1d5613ee2e8a-kube-api-access-x57sk\") pod \"ovn-operator-controller-manager-6f75f45d54-v5mmf\" (UID: \"50682373-a3d7-491e-84a0-1d5613ee2e8a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.928102 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g84zp\" (UniqueName: \"kubernetes.io/projected/91971c24-6187-432c-84ba-65dba69b4598-kube-api-access-g84zp\") pod \"placement-operator-controller-manager-79d5ccc684-qn5x9\" (UID: \"91971c24-6187-432c-84ba-65dba69b4598\") " pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.931961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x57sk\" (UniqueName: \"kubernetes.io/projected/50682373-a3d7-491e-84a0-1d5613ee2e8a-kube-api-access-x57sk\") pod \"ovn-operator-controller-manager-6f75f45d54-v5mmf\" (UID: \"50682373-a3d7-491e-84a0-1d5613ee2e8a\") " pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:26 crc kubenswrapper[4985]: I0128 18:33:26.976809 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj"] Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014170 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7prf\" (UniqueName: \"kubernetes.io/projected/1310770f-7cb7-4874-b2a0-4ef733911716-kube-api-access-s7prf\") pod \"test-operator-controller-manager-69797bbcbd-xwzkh\" (UID: \"1310770f-7cb7-4874-b2a0-4ef733911716\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2pcn\" (UniqueName: \"kubernetes.io/projected/d4d6e990-839d-4186-9382-1a67922556df-kube-api-access-s2pcn\") pod \"watcher-operator-controller-manager-564965969-xzkhh\" (UID: \"d4d6e990-839d-4186-9382-1a67922556df\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014485 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7w4p\" (UniqueName: \"kubernetes.io/projected/359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3-kube-api-access-m7w4p\") pod \"telemetry-operator-controller-manager-74c974475f-b9j67\" (UID: \"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3\") " pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014538 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxbxh\" (UniqueName: \"kubernetes.io/projected/38846228-cec9-4a59-b9bb-c766121dacde-kube-api-access-zxbxh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7s7s2\" (UID: \"38846228-cec9-4a59-b9bb-c766121dacde\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014560 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsxl\" (UniqueName: \"kubernetes.io/projected/c1e8524e-e047-4872-9ee1-ae4e013f8825-kube-api-access-wmsxl\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014582 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014598 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.014642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5d8sp\" (UniqueName: \"kubernetes.io/projected/c95374e8-7d41-4a49-add9-7f28196d70eb-kube-api-access-5d8sp\") pod \"swift-operator-controller-manager-547cbdb99f-9kbdr\" (UID: \"c95374e8-7d41-4a49-add9-7f28196d70eb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.047220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7prf\" (UniqueName: \"kubernetes.io/projected/1310770f-7cb7-4874-b2a0-4ef733911716-kube-api-access-s7prf\") pod \"test-operator-controller-manager-69797bbcbd-xwzkh\" (UID: \"1310770f-7cb7-4874-b2a0-4ef733911716\") " pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.048810 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.050704 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5d8sp\" (UniqueName: \"kubernetes.io/projected/c95374e8-7d41-4a49-add9-7f28196d70eb-kube-api-access-5d8sp\") pod \"swift-operator-controller-manager-547cbdb99f-9kbdr\" (UID: \"c95374e8-7d41-4a49-add9-7f28196d70eb\") " pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.077986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7w4p\" (UniqueName: \"kubernetes.io/projected/359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3-kube-api-access-m7w4p\") pod \"telemetry-operator-controller-manager-74c974475f-b9j67\" (UID: \"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3\") " pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116178 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2pcn\" (UniqueName: \"kubernetes.io/projected/d4d6e990-839d-4186-9382-1a67922556df-kube-api-access-s2pcn\") pod \"watcher-operator-controller-manager-564965969-xzkhh\" (UID: \"d4d6e990-839d-4186-9382-1a67922556df\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxbxh\" (UniqueName: \"kubernetes.io/projected/38846228-cec9-4a59-b9bb-c766121dacde-kube-api-access-zxbxh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7s7s2\" (UID: \"38846228-cec9-4a59-b9bb-c766121dacde\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsxl\" (UniqueName: \"kubernetes.io/projected/c1e8524e-e047-4872-9ee1-ae4e013f8825-kube-api-access-wmsxl\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.116407 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.116554 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.116627 4985 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.616607371 +0000 UTC m=+1218.443170192 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.117382 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.117424 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:27.617413654 +0000 UTC m=+1218.443976475 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.136390 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxbxh\" (UniqueName: \"kubernetes.io/projected/38846228-cec9-4a59-b9bb-c766121dacde-kube-api-access-zxbxh\") pod \"rabbitmq-cluster-operator-manager-668c99d594-7s7s2\" (UID: \"38846228-cec9-4a59-b9bb-c766121dacde\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.137961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2pcn\" (UniqueName: \"kubernetes.io/projected/d4d6e990-839d-4186-9382-1a67922556df-kube-api-access-s2pcn\") pod \"watcher-operator-controller-manager-564965969-xzkhh\" (UID: \"d4d6e990-839d-4186-9382-1a67922556df\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.138000 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsxl\" (UniqueName: \"kubernetes.io/projected/c1e8524e-e047-4872-9ee1-ae4e013f8825-kube-api-access-wmsxl\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.141805 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.144830 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"]
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.145048 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" event={"ID":"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8","Type":"ContainerStarted","Data":"1c765d46b3cfb7ae3cdf987f0a72114eba08370d5ed07c2d070bcbfc78236f56"}
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.165108 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr"
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.177222 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2"
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.217837 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"
Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.218053 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.218110 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:28.218090396 +0000 UTC m=+1219.044653217 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.291911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.306968 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.368455 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh"
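The kubelet.go:2453 entry above is the first PLEG (pod lifecycle event generator) event in this batch: a relist of the runtime noticed a new container ID under the barbican operator pod (most likely its freshly created sandbox, 1c765d46...) and fed a ContainerStarted event back into the sync loop. A schematic of relist-and-diff, with simplified types assumed for illustration rather than kubelet's PLEG structures:

// Schematic of PLEG-style relisting behind the kubelet.go:2453 entry:
// periodically list containers from the runtime, diff against the last
// snapshot, and emit an event for anything new. LifecycleEvent and the
// string maps are simplified assumptions, not kubelet's PLEG types.
package main

import "fmt"

type LifecycleEvent struct {
	PodID string // namespace/name, as printed in the log line
	Type  string // e.g. "ContainerStarted"
	Data  string // container or sandbox ID
}

// relist pushes a ContainerStarted event for every container ID present
// in cur but absent from prev, then becomes the new snapshot.
func relist(prev, cur map[string]string, events chan<- LifecycleEvent) map[string]string {
	for id, pod := range cur {
		if _, seen := prev[id]; !seen {
			events <- LifecycleEvent{PodID: pod, Type: "ContainerStarted", Data: id}
		}
	}
	return cur
}

func main() {
	events := make(chan LifecycleEvent, 8)
	snapshot := map[string]string{}
	// The next relist sees the barbican operator's freshly created sandbox.
	snapshot = relist(snapshot, map[string]string{
		"1c765d46b3cfb7ae3cdf987f0a72114eba08370d5ed07c2d070bcbfc78236f56": "openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj",
	}, events)
	close(events)
	for ev := range events {
		fmt.Printf("SyncLoop (PLEG): event for pod %q: %s %s\n", ev.PodID, ev.Type, ev.Data)
	}
	_ = snapshot
}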
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.522627 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh"]
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.524899 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"
Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.525129 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found
Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.525189 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:29.525172236 +0000 UTC m=+1220.351735057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.554154 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84"]
Jan 28 18:33:27 crc kubenswrapper[4985]: W0128 18:33:27.557121 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99893bb5_33ef_4159_bf8f_1c79a58e74d9.slice/crio-5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e WatchSource:0}: Error finding container 5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e: Status 404 returned error can't find the container with id 5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e
Jan 28 18:33:27 crc kubenswrapper[4985]: W0128 18:33:27.560921 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4dfb4621_d061_4224_8aee_840726565aa3.slice/crio-9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029 WatchSource:0}: Error finding container 9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029: Status 404 returned error can't find the container with id 9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.568604 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"]
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.626722 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"
Jan 28 18:33:27 crc kubenswrapper[4985]: I0128 18:33:27.626791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"
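The two W-level manager.go:1169 entries above are cAdvisor's cgroup watcher racing container creation: the kubepods-burstable cgroups for the new glance and designate pod sandboxes appeared before the runtime could answer a lookup for them, so the inspect came back 404 and the watcher logged and moved on; housekeeping picks the containers up moments later. A sketch of treating that not-found as a benign race; the inspect callback and the error value are assumptions for illustration:

// Sketch of treating "container not found" as a benign race, which is
// all the W-level manager.go:1169 entries amount to. The inspect
// callback and the error value are assumptions for illustration; the
// real watcher is cAdvisor's raw container watcher.
package main

import (
	"errors"
	"fmt"
)

var errContainerNotFound = errors.New("can't find the container with id")

func handleWatchEvent(cgroupPath string, inspect func(string) error) {
	if err := inspect(cgroupPath); err != nil {
		if errors.Is(err, errContainerNotFound) {
			// Benign: the cgroup showed up before the runtime registered
			// the container. The next housekeeping pass will find it.
			fmt.Printf("W Failed to process watch event %s: %v\n", cgroupPath, err)
			return
		}
		panic(err) // anything else is a real failure
	}
	fmt.Println("container registered:", cgroupPath)
}

func main() {
	notReadyYet := func(string) error {
		return fmt.Errorf("Status 404 returned error %w", errContainerNotFound)
	}
	handleWatchEvent("/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99893bb5_33ef_4159_bf8f_1c79a58e74d9.slice/crio-5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e", notReadyYet)
}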
18:33:27.626791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627108 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627179 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:28.627159685 +0000 UTC m=+1219.453722506 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627108 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:27 crc kubenswrapper[4985]: E0128 18:33:27.627716 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:28.62770208 +0000 UTC m=+1219.454264901 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.122105 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.139274 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.150219 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod873dc5cd_5c8e_417e_b99a_a52dfcfd701b.slice/crio-94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965 WatchSource:0}: Error finding container 94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965: Status 404 returned error can't find the container with id 94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965 Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.150541 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod654a2c56_81a7_4b32_ad1d_c4d60b054b47.slice/crio-3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5 WatchSource:0}: Error finding container 3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5: Status 404 returned error can't find the container with id 3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5 Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.150840 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9c7284ab_b40f_4275_b85e_77aebd660135.slice/crio-f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea WatchSource:0}: Error finding container f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea: Status 404 returned error can't find the container with id f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.151027 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.151177 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod99b88683_3e0a_4afa_91ab_71feac27fba1.slice/crio-950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed WatchSource:0}: Error finding container 950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed: Status 404 returned error can't find the container with id 950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.160070 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.165131 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod367b6525_0367_437a_9fe3_b2007411f4af.slice/crio-f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f 
WatchSource:0}: Error finding container f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f: Status 404 returned error can't find the container with id f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.179864 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.180460 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" event={"ID":"99893bb5-33ef-4159-bf8f-1c79a58e74d9","Type":"ContainerStarted","Data":"5ed3ee498bd37c360476eb7e76c38d91112c1eb0d1874cd6aceaba58577cae7e"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.189083 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.199455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerStarted","Data":"38827df845490c23083bfe7ad56408d36b7f133ee4205b5d8f2c508acb6f51bb"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.199693 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.202222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" event={"ID":"4dfb4621-d061-4224-8aee-840726565aa3","Type":"ContainerStarted","Data":"9ec18a9a77ad0fdc4f804273a8abe29c520a02bdab0106a53a2c839719a8c029"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.203797 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" event={"ID":"7ef21481-ade5-436a-ae3a-f284a7e438d3","Type":"ContainerStarted","Data":"07b41414d7e1ab56b15b8ff840c83af0b9ece1889e20e7b35e89d692e025a4f6"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.204779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" event={"ID":"75e682e9-e5a5-47f1-83cc-c8004ebe224a","Type":"ContainerStarted","Data":"533a43b63baaef4c48b0595f64bf2da5a0cf4bf59f804a0b873b863aa677d7fc"} Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.241804 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.241983 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.242051 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. 
No retries permitted until 2026-01-28 18:33:30.242031345 +0000 UTC m=+1221.068594166 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.420611 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.448172 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.463531 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.473768 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.501240 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9897766d_6497_4d0e_bd9a_ef8e31a08e24.slice/crio-f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f WatchSource:0}: Error finding container f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f: Status 404 returned error can't find the container with id f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.501831 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc95374e8_7d41_4a49_add9_7f28196d70eb.slice/crio-9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb WatchSource:0}: Error finding container 9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb: Status 404 returned error can't find the container with id 9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.650291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.650349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650625 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650699 4985 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:30.650679752 +0000 UTC m=+1221.477242573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650760 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.650789 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:30.650779845 +0000 UTC m=+1221.477342666 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.664906 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.674735 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.685885 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38846228_cec9_4a59_b9bb_c766121dacde.slice/crio-a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a WatchSource:0}: Error finding container a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a: Status 404 returned error can't find the container with id a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.688108 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-xzkhh"] Jan 28 18:33:28 crc kubenswrapper[4985]: I0128 18:33:28.697944 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"] Jan 28 18:33:28 crc kubenswrapper[4985]: W0128 18:33:28.699840 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod359fd3be_e8b7_4f51_bb1d_a5d8bdc228c3.slice/crio-6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f WatchSource:0}: Error finding container 6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f: Status 404 returned error can't find the container with id 6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.701552 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7prf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-xwzkh_openstack-operators(1310770f-7cb7-4874-b2a0-4ef733911716): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.703644 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.712225 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.132:5001/openstack-k8s-operators/telemetry-operator:78376376ba0b23dd44ee177d28d423a994de68bb,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m7w4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-74c974475f-b9j67_openstack-operators(359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.713394 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.719424 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2pcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-xzkhh_openstack-operators(d4d6e990-839d-4186-9382-1a67922556df): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 18:33:28 crc kubenswrapper[4985]: E0128 18:33:28.720633 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.218327 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" event={"ID":"c95374e8-7d41-4a49-add9-7f28196d70eb","Type":"ContainerStarted","Data":"9d7afa29f693b25687d16ef718f2bf55cc4d3cbeec40ff5fce66297d9571afeb"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.220093 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerStarted","Data":"8bd4c59f1b88139542870f0eac8ceb9141b65af7edd0cfb46e3ef029d2d339e3"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.222403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerStarted","Data":"a1fd34a0d34d1ee86d4d600619c1244234a4ca8e4e1e0600d5cbaaefe798df5a"} Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.222696 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.227795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerStarted","Data":"f56b3d96fddac1a73206f53aabfbfc3690deab36214d567ff1d8b8902021347f"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.239059 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" event={"ID":"654a2c56-81a7-4b32-ad1d-c4d60b054b47","Type":"ContainerStarted","Data":"3226c3e56950335b52dfc2884483dd6cf371022588ebf9eea7dca7bda293b6b5"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.245936 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerStarted","Data":"1885650fb2939d0a3e8b331c3e371a5feffffd540e2271ca517ab31770e313cf"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.248382 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" event={"ID":"9897766d-6497-4d0e-bd9a-ef8e31a08e24","Type":"ContainerStarted","Data":"f22a32963324473c47accb3f5fce6d50e0a1ff7c411a434f2df61c0d594ee00f"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.250103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" event={"ID":"873dc5cd-5c8e-417e-b99a-a52dfcfd701b","Type":"ContainerStarted","Data":"94846cb5686126f16d6556aa80994aabf816e2a6268715c2a885ea4c0d524965"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.252924 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerStarted","Data":"950783a93d654fc9c4324b80ba1ddbb41213709ccc769713e4f93646fd2a9aed"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.262848 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerStarted","Data":"6d19c1d188ea416165240754b078175bbfd0bf8f297d5cb78c4a0bc97c7fca7f"} Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.265948 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/openstack-k8s-operators/telemetry-operator:78376376ba0b23dd44ee177d28d423a994de68bb\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.277354 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerStarted","Data":"797597753d738831804c41e63a07a1ab4d238d1592e2cd57bf33e019b0a8261a"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.277403 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerStarted","Data":"f5bf03c5437bcca2b7d7f1632d6f9b02968d89997187b55ae146d48ba4b887ea"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.293211 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerStarted","Data":"b1b03445d0106999db73a6aa3bfa5147243f4a023495cb71ae9b47af73b36b54"} Jan 28 18:33:29 crc 
kubenswrapper[4985]: E0128 18:33:29.294628 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.298387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerStarted","Data":"841b1b41f3d001fa1b16fadde23957fb41377241b955ac2022a56af285c60a7e"} Jan 28 18:33:29 crc kubenswrapper[4985]: I0128 18:33:29.590507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.590716 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:29 crc kubenswrapper[4985]: E0128 18:33:29.590763 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:33.590749291 +0000 UTC m=+1224.417312112 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.308486 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: I0128 18:33:30.308497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.308559 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:34.308539526 +0000 UTC m=+1225.135102347 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.343079 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.132:5001/openstack-k8s-operators/telemetry-operator:78376376ba0b23dd44ee177d28d423a994de68bb\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.346406 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.351110 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:30 crc kubenswrapper[4985]: I0128 18:33:30.718403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:30 crc kubenswrapper[4985]: I0128 18:33:30.718862 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.718796 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.718967 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:34.718951823 +0000 UTC m=+1225.545514644 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.719114 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:30 crc kubenswrapper[4985]: E0128 18:33:30.719162 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:34.719153049 +0000 UTC m=+1225.545715870 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:33 crc kubenswrapper[4985]: I0128 18:33:33.675233 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:33 crc kubenswrapper[4985]: E0128 18:33:33.675417 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:33 crc kubenswrapper[4985]: E0128 18:33:33.675506 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:41.675482633 +0000 UTC m=+1232.502045474 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: I0128 18:33:34.401984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.402219 4985 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.402303 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert podName:70329607-4bbe-43ad-bb7a-2b62f26af473 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:42.402277262 +0000 UTC m=+1233.228840083 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert") pod "openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" (UID: "70329607-4bbe-43ad-bb7a-2b62f26af473") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: I0128 18:33:34.809743 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:34 crc kubenswrapper[4985]: I0128 18:33:34.810058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.809916 4985 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.810194 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:42.810175018 +0000 UTC m=+1233.636737839 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "metrics-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.810230 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:34 crc kubenswrapper[4985]: E0128 18:33:34.810300 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:42.810285051 +0000 UTC m=+1233.636847872 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.573051 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84" Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.573847 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-2f2vn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-6b9fb5fdcb-rbn84_openstack-operators(9897766d-6497-4d0e-bd9a-ef8e31a08e24): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.575199 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" 
podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" Jan 28 18:33:41 crc kubenswrapper[4985]: I0128 18:33:41.746598 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.746800 4985 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:41 crc kubenswrapper[4985]: E0128 18:33:41.746855 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert podName:697da6ae-2950-468c-82e9-bcb1a1af61e7 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:57.746839125 +0000 UTC m=+1248.573401956 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert") pod "infra-operator-controller-manager-694cf4f878-5zqpj" (UID: "697da6ae-2950-468c-82e9-bcb1a1af61e7") : secret "infra-operator-webhook-server-cert" not found Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.462767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.476135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/70329607-4bbe-43ad-bb7a-2b62f26af473-cert\") pod \"openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz\" (UID: \"70329607-4bbe-43ad-bb7a-2b62f26af473\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:42 crc kubenswrapper[4985]: E0128 18:33:42.500241 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:b673f00227298dcfa89abb46f8296a0825add42da41e8a4bf4dd13367c738d84\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.635977 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.872346 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.872412 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:42 crc kubenswrapper[4985]: E0128 18:33:42.872784 4985 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 18:33:42 crc kubenswrapper[4985]: E0128 18:33:42.872936 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs podName:c1e8524e-e047-4872-9ee1-ae4e013f8825 nodeName:}" failed. No retries permitted until 2026-01-28 18:33:58.872897527 +0000 UTC m=+1249.699460348 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs") pod "openstack-operator-controller-manager-68b9ccc946-rk65w" (UID: "c1e8524e-e047-4872-9ee1-ae4e013f8825") : secret "webhook-server-cert" not found Jan 28 18:33:42 crc kubenswrapper[4985]: I0128 18:33:42.885013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-metrics-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:43 crc kubenswrapper[4985]: E0128 18:33:43.912993 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327" Jan 28 18:33:43 crc kubenswrapper[4985]: E0128 18:33:43.913453 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x57sk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-6f75f45d54-v5mmf_openstack-operators(50682373-a3d7-491e-84a0-1d5613ee2e8a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:43 crc kubenswrapper[4985]: E0128 18:33:43.914588 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" Jan 28 18:33:44 crc kubenswrapper[4985]: E0128 18:33:44.520420 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:fa46fc14710961e6b4a76a3522dca3aa3cfa71436c7cf7ade533d3712822f327\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.031986 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.032185 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5d8sp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-547cbdb99f-9kbdr_openstack-operators(c95374e8-7d41-4a49-add9-7f28196d70eb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.033434 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" Jan 28 18:33:45 crc kubenswrapper[4985]: E0128 18:33:45.526893 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:445e951df2f21df6d33a466f75917e0f6103052ae751ae11887136e8ab165922\\\"\"" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" Jan 28 18:33:46 crc kubenswrapper[4985]: E0128 18:33:46.838615 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd" Jan 28 18:33:46 crc kubenswrapper[4985]: E0128 18:33:46.838868 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5zlwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5f4cd88d46-4smn2_openstack-operators(367b6525-0367-437a-9fe3-b2007411f4af): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:46 crc kubenswrapper[4985]: E0128 18:33:46.840364 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.433190 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.433430 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-g84zp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-79d5ccc684-qn5x9_openstack-operators(91971c24-6187-432c-84ba-65dba69b4598): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.434674 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.556563 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:ed489f21a0c72557d2da5a271808f19b7c7b85ef32fd9f4aa91bdbfc5bca3bdd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.557056 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:013c0ad82d21a21c7eece5cd4b5d5c4b8eb410b6671ac33a6f3fb78c8510811d\\\"\"" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" 
podUID="91971c24-6187-432c-84ba-65dba69b4598" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.978789 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.978997 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7mf2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod manila-operator-controller-manager-78c6999f6f-9lm5f_openstack-operators(654a2c56-81a7-4b32-ad1d-c4d60b054b47): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:47 crc kubenswrapper[4985]: E0128 18:33:47.980204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podUID="654a2c56-81a7-4b32-ad1d-c4d60b054b47" Jan 28 18:33:48 crc kubenswrapper[4985]: E0128 18:33:48.568766 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling 
image \\\"quay.io/openstack-k8s-operators/manila-operator@sha256:8bee4480babd6fd8f686e0ba52a304acb6ffb90f09c7c57e7f5df5f7658836d8\\\"\"" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podUID="654a2c56-81a7-4b32-ad1d-c4d60b054b47" Jan 28 18:33:49 crc kubenswrapper[4985]: I0128 18:33:49.849955 4985 scope.go:117] "RemoveContainer" containerID="d76435578daceca6b087721392f95b630b5ec8b21a8af1a1238723f593a47a96" Jan 28 18:33:51 crc kubenswrapper[4985]: E0128 18:33:51.614707 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e" Jan 28 18:33:51 crc kubenswrapper[4985]: E0128 18:33:51.615108 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-m6pmh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-78d58447c5-dlssr_openstack-operators(873dc5cd-5c8e-417e-b99a-a52dfcfd701b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:51 crc kubenswrapper[4985]: E0128 18:33:51.616579 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc 
error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.211837 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.212081 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p2qth,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-operator-controller-manager-594c8c9d5d-fm7nr_openstack-operators(cc7f29e1-e6e0-45a0-920a-4b18d8204c65): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.213525 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.597444 4985 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:816d474f502d730d6a2522a272b0e09a2d579ac63617817655d60c54bda4191e\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" Jan 28 18:33:52 crc kubenswrapper[4985]: E0128 18:33:52.597887 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/heat-operator@sha256:2f9a2f064448faebbae58f52d564dc0e8e39bed0fc12bd6b9fe925e42f1b5492\\\"\"" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" Jan 28 18:33:54 crc kubenswrapper[4985]: E0128 18:33:54.997813 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822" Jan 28 18:33:54 crc kubenswrapper[4985]: E0128 18:33:54.998041 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tdqdn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod horizon-operator-controller-manager-77d5c5b54f-6skp6_openstack-operators(99b88683-3e0a-4afa-91ab-71feac27fba1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:54 crc kubenswrapper[4985]: E0128 18:33:54.999299 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.562215 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.562650 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b2z62,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-b45d7bf98-75d84_openstack-operators(4dfb4621-d061-4224-8aee-840726565aa3): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 
18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.563818 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podUID="4dfb4621-d061-4224-8aee-840726565aa3" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.628457 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:6c88312afa9673f7b72c558368034d7a488ead73080cdcdf581fe85b99263ece\\\"\"" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podUID="4dfb4621-d061-4224-8aee-840726565aa3" Jan 28 18:33:55 crc kubenswrapper[4985]: E0128 18:33:55.628709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:3311e627bcb860d9443592a2c67078417318c9eb77d8ef4d07f9aa7027d46822\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.244170 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.244825 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fv6lq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-b8b6d4659-hktv5_openstack-operators(b5a0c28d-1434-40f0-8759-d76b65dc2c30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.246153 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.637283 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:8e340ff11922b38e811261de96982e1aff5f4eb8f225d1d9f5973025a4fe8349\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.986110 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.986322 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s7prf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-69797bbcbd-xwzkh_openstack-operators(1310770f-7cb7-4874-b2a0-4ef733911716): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:56 crc kubenswrapper[4985]: E0128 18:33:56.987580 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:33:57 crc kubenswrapper[4985]: E0128 18:33:57.509888 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b" Jan 28 18:33:57 crc kubenswrapper[4985]: E0128 18:33:57.510443 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s2pcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-xzkhh_openstack-operators(d4d6e990-839d-4186-9382-1a67922556df): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:57 crc kubenswrapper[4985]: E0128 18:33:57.511624 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:33:57 crc kubenswrapper[4985]: I0128 18:33:57.823595 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:57 crc kubenswrapper[4985]: I0128 18:33:57.832692 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/697da6ae-2950-468c-82e9-bcb1a1af61e7-cert\") pod \"infra-operator-controller-manager-694cf4f878-5zqpj\" (UID: \"697da6ae-2950-468c-82e9-bcb1a1af61e7\") " pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:57 crc kubenswrapper[4985]: I0128 18:33:57.955770 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:33:58 crc kubenswrapper[4985]: I0128 18:33:58.905294 4985 scope.go:117] "RemoveContainer" containerID="191c84609dfb2c8268e33648b1fa5d4251ffb2f7286e97b627cb86dee2d94615" Jan 28 18:33:58 crc kubenswrapper[4985]: E0128 18:33:58.926039 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2" Jan 28 18:33:58 crc kubenswrapper[4985]: E0128 18:33:58.926248 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-zxbxh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-7s7s2_openstack-operators(38846228-cec9-4a59-b9bb-c766121dacde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:33:58 crc kubenswrapper[4985]: E0128 18:33:58.927500 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" podUID="38846228-cec9-4a59-b9bb-c766121dacde" Jan 28 18:33:58 crc kubenswrapper[4985]: I0128 18:33:58.951417 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: 
\"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:58 crc kubenswrapper[4985]: I0128 18:33:58.959473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/c1e8524e-e047-4872-9ee1-ae4e013f8825-webhook-certs\") pod \"openstack-operator-controller-manager-68b9ccc946-rk65w\" (UID: \"c1e8524e-e047-4872-9ee1-ae4e013f8825\") " pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.256303 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.443020 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"] Jan 28 18:33:59 crc kubenswrapper[4985]: W0128 18:33:59.444163 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70329607_4bbe_43ad_bb7a_2b62f26af473.slice/crio-3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132 WatchSource:0}: Error finding container 3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132: Status 404 returned error can't find the container with id 3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132 Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.631200 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj"] Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.654497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w"] Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.669088 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" event={"ID":"75e682e9-e5a5-47f1-83cc-c8004ebe224a","Type":"ContainerStarted","Data":"596b4dba169c9d1346382306092c265742b4366e6f0e6de87ce3064127855dd0"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.669394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.671871 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerStarted","Data":"3a28077655ae09027099f0e849e32bf28ac0e788d40fd7454cac4924a0de6132"} Jan 28 18:33:59 crc kubenswrapper[4985]: W0128 18:33:59.674966 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod697da6ae_2950_468c_82e9_bcb1a1af61e7.slice/crio-2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930 WatchSource:0}: Error finding container 2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930: Status 404 returned error can't find the container with id 2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930 Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.675540 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" event={"ID":"4fa1b302-aad3-4e6e-9cd2-bba65262c1e8","Type":"ContainerStarted","Data":"5365af029ad5ded9a998e8f9e1cd3a0cd10f3a5754f748b72b8396f401214696"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.676130 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.678703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerStarted","Data":"ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.678910 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.681374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" event={"ID":"7ef21481-ade5-436a-ae3a-f284a7e438d3","Type":"ContainerStarted","Data":"b754f63e41c81ccfe7cbc1779be3894eb7b9b60785b05928a0f95f05a01db4aa"} Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.681471 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:33:59 crc kubenswrapper[4985]: E0128 18:33:59.682173 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" podUID="38846228-cec9-4a59-b9bb-c766121dacde" Jan 28 18:33:59 crc kubenswrapper[4985]: W0128 18:33:59.687291 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc1e8524e_e047_4872_9ee1_ae4e013f8825.slice/crio-9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6 WatchSource:0}: Error finding container 9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6: Status 404 returned error can't find the container with id 9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6 Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.690770 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podStartSLOduration=4.739668547 podStartE2EDuration="34.690748873s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.149190813 +0000 UTC m=+1218.975753634" lastFinishedPulling="2026-01-28 18:33:58.100271139 +0000 UTC m=+1248.926833960" observedRunningTime="2026-01-28 18:33:59.683660163 +0000 UTC m=+1250.510222984" watchObservedRunningTime="2026-01-28 18:33:59.690748873 +0000 UTC m=+1250.517311694" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.718957 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podStartSLOduration=5.664042703 podStartE2EDuration="34.718938749s" podCreationTimestamp="2026-01-28 
18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.157498665 +0000 UTC m=+1217.984061486" lastFinishedPulling="2026-01-28 18:33:56.212394711 +0000 UTC m=+1247.038957532" observedRunningTime="2026-01-28 18:33:59.718471495 +0000 UTC m=+1250.545034316" watchObservedRunningTime="2026-01-28 18:33:59.718938749 +0000 UTC m=+1250.545501570" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.809133 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podStartSLOduration=5.485994197 podStartE2EDuration="34.809111565s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:26.889072167 +0000 UTC m=+1217.715634988" lastFinishedPulling="2026-01-28 18:33:56.212189535 +0000 UTC m=+1247.038752356" observedRunningTime="2026-01-28 18:33:59.779309593 +0000 UTC m=+1250.605872424" watchObservedRunningTime="2026-01-28 18:33:59.809111565 +0000 UTC m=+1250.635674386" Jan 28 18:33:59 crc kubenswrapper[4985]: I0128 18:33:59.812912 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podStartSLOduration=4.883381594 podStartE2EDuration="34.812891651s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.17066188 +0000 UTC m=+1218.997224701" lastFinishedPulling="2026-01-28 18:33:58.100171937 +0000 UTC m=+1248.926734758" observedRunningTime="2026-01-28 18:33:59.802712884 +0000 UTC m=+1250.629275705" watchObservedRunningTime="2026-01-28 18:33:59.812891651 +0000 UTC m=+1250.639454472" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.697130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" event={"ID":"99893bb5-33ef-4159-bf8f-1c79a58e74d9","Type":"ContainerStarted","Data":"233a43b6b8981b47ec5714f819a1eee5418974ea1fc4d83d0b402ba20404e013"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.697516 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.702137 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerStarted","Data":"ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.702493 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.704265 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" event={"ID":"9897766d-6497-4d0e-bd9a-ef8e31a08e24","Type":"ContainerStarted","Data":"244f2175d0f0083282126d17a82f0ff642cfc28ca6ee1538cedf6e4920fb3907"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.704760 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.707520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" 
event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerStarted","Data":"2a322700646174507eb4bc8f892ce834e331fdf9781e07517816051ba142d930"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.710646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" event={"ID":"c1e8524e-e047-4872-9ee1-ae4e013f8825","Type":"ContainerStarted","Data":"5f53fa7d92091209441e8e64320cea938b2d017d0c909c4229f125c84c482055"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.710785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" event={"ID":"c1e8524e-e047-4872-9ee1-ae4e013f8825","Type":"ContainerStarted","Data":"9ed0beaeedd642e690ab7450823f7803959f39b403f7d6fdee0e93680f3c49f6"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.711072 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.714379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerStarted","Data":"33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.714641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.716432 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" event={"ID":"c95374e8-7d41-4a49-add9-7f28196d70eb","Type":"ContainerStarted","Data":"1fed3409e13546ceae0b5c7a89f2c6b82737a4ae622cdb4f7150010d61389b1f"} Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.734767 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podStartSLOduration=7.083679323 podStartE2EDuration="35.734744228s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.561281405 +0000 UTC m=+1218.387844226" lastFinishedPulling="2026-01-28 18:33:56.21234627 +0000 UTC m=+1247.038909131" observedRunningTime="2026-01-28 18:34:00.719927639 +0000 UTC m=+1251.546490470" watchObservedRunningTime="2026-01-28 18:34:00.734744228 +0000 UTC m=+1251.561307049" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.748650 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podStartSLOduration=5.048398433 podStartE2EDuration="35.748605439s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.503919238 +0000 UTC m=+1219.330482059" lastFinishedPulling="2026-01-28 18:33:59.204126244 +0000 UTC m=+1250.030689065" observedRunningTime="2026-01-28 18:34:00.739653956 +0000 UTC m=+1251.566216777" watchObservedRunningTime="2026-01-28 18:34:00.748605439 +0000 UTC m=+1251.575168260" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.794007 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podStartSLOduration=34.79396812 
podStartE2EDuration="34.79396812s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:34:00.791668555 +0000 UTC m=+1251.618231376" watchObservedRunningTime="2026-01-28 18:34:00.79396812 +0000 UTC m=+1251.620530941" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.850434 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podStartSLOduration=5.356023908 podStartE2EDuration="35.850407323s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.503314241 +0000 UTC m=+1219.329877062" lastFinishedPulling="2026-01-28 18:33:58.997697646 +0000 UTC m=+1249.824260477" observedRunningTime="2026-01-28 18:34:00.820217131 +0000 UTC m=+1251.646779972" watchObservedRunningTime="2026-01-28 18:34:00.850407323 +0000 UTC m=+1251.676970144" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.862850 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podStartSLOduration=5.144762593 podStartE2EDuration="35.862828184s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.456426467 +0000 UTC m=+1219.282989288" lastFinishedPulling="2026-01-28 18:33:59.174492058 +0000 UTC m=+1250.001054879" observedRunningTime="2026-01-28 18:34:00.851199235 +0000 UTC m=+1251.677762046" watchObservedRunningTime="2026-01-28 18:34:00.862828184 +0000 UTC m=+1251.689391005" Jan 28 18:34:00 crc kubenswrapper[4985]: I0128 18:34:00.885809 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podStartSLOduration=5.399967348 podStartE2EDuration="35.885791712s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.711985032 +0000 UTC m=+1219.538547853" lastFinishedPulling="2026-01-28 18:33:59.197809406 +0000 UTC m=+1250.024372217" observedRunningTime="2026-01-28 18:34:00.885577746 +0000 UTC m=+1251.712140567" watchObservedRunningTime="2026-01-28 18:34:00.885791712 +0000 UTC m=+1251.712354533" Jan 28 18:34:01 crc kubenswrapper[4985]: I0128 18:34:01.724764 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerStarted","Data":"62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa"} Jan 28 18:34:01 crc kubenswrapper[4985]: I0128 18:34:01.727111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:34:01 crc kubenswrapper[4985]: I0128 18:34:01.748646 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podStartSLOduration=4.15645547 podStartE2EDuration="36.748623711s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.173017976 +0000 UTC m=+1218.999580797" lastFinishedPulling="2026-01-28 18:34:00.765186217 +0000 UTC m=+1251.591749038" observedRunningTime="2026-01-28 18:34:01.745435151 +0000 UTC m=+1252.571997982" watchObservedRunningTime="2026-01-28 18:34:01.748623711 +0000 UTC 
m=+1252.575186532" Jan 28 18:34:02 crc kubenswrapper[4985]: I0128 18:34:02.751617 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" event={"ID":"654a2c56-81a7-4b32-ad1d-c4d60b054b47","Type":"ContainerStarted","Data":"1ef3dc985b18a845765f879402221605ba345883a0e78518b5164ff3d2d033a0"} Jan 28 18:34:02 crc kubenswrapper[4985]: I0128 18:34:02.752735 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" Jan 28 18:34:03 crc kubenswrapper[4985]: I0128 18:34:03.287871 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podStartSLOduration=4.572693103 podStartE2EDuration="38.287850547s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.171193755 +0000 UTC m=+1218.997756576" lastFinishedPulling="2026-01-28 18:34:01.886351199 +0000 UTC m=+1252.712914020" observedRunningTime="2026-01-28 18:34:02.770781959 +0000 UTC m=+1253.597344780" watchObservedRunningTime="2026-01-28 18:34:03.287850547 +0000 UTC m=+1254.114413368" Jan 28 18:34:05 crc kubenswrapper[4985]: I0128 18:34:05.884048 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" Jan 28 18:34:05 crc kubenswrapper[4985]: I0128 18:34:05.910006 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 18:34:05 crc kubenswrapper[4985]: I0128 18:34:05.953747 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.349598 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.503063 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.553168 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 18:34:06 crc kubenswrapper[4985]: I0128 18:34:06.732019 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.051673 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.165759 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.171105 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" Jan 28 18:34:07 crc kubenswrapper[4985]: I0128 18:34:07.294562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 18:34:09 crc kubenswrapper[4985]: I0128 18:34:09.295015 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 18:34:10 crc kubenswrapper[4985]: E0128 18:34:10.266529 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:c8dde42dafd41026ed2e4cfc26efc0fff63c4ba9d31326ae7dc644ccceaafa9d\\\"\"" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.856873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerStarted","Data":"bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.858032 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.859781 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" event={"ID":"873dc5cd-5c8e-417e-b99a-a52dfcfd701b","Type":"ContainerStarted","Data":"6be03048a45e76fc38842b0f2aa2d2749422dcbc025d44c650518ad71eb52fc8"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.860275 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.862602 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerStarted","Data":"1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.863215 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.865465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" event={"ID":"4dfb4621-d061-4224-8aee-840726565aa3","Type":"ContainerStarted","Data":"80ea51f2e278a8d38f5c2b991ca9c8c9e8dd7d3746d654e3e185b9388c1c038a"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.866038 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.868665 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerStarted","Data":"b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.869446 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.871208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerStarted","Data":"b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.871713 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.873152 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerStarted","Data":"9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280"} Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.873353 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.887024 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podStartSLOduration=35.696836597 podStartE2EDuration="45.887004779s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:59.677338914 +0000 UTC m=+1250.503901735" lastFinishedPulling="2026-01-28 18:34:09.867507096 +0000 UTC m=+1260.694069917" observedRunningTime="2026-01-28 18:34:10.876022619 +0000 UTC m=+1261.702585460" watchObservedRunningTime="2026-01-28 18:34:10.887004779 +0000 UTC m=+1261.713567600" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.895408 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podStartSLOduration=3.5892935489999998 podStartE2EDuration="45.895388866s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.569574639 +0000 UTC m=+1218.396137460" lastFinishedPulling="2026-01-28 18:34:09.875669956 +0000 UTC m=+1260.702232777" observedRunningTime="2026-01-28 18:34:10.893741369 +0000 UTC m=+1261.720304190" watchObservedRunningTime="2026-01-28 18:34:10.895388866 +0000 UTC m=+1261.721951687" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.910539 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podStartSLOduration=4.208314645 podStartE2EDuration="45.910510743s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.171965886 +0000 UTC m=+1218.998528707" lastFinishedPulling="2026-01-28 18:34:09.874161984 +0000 UTC m=+1260.700724805" observedRunningTime="2026-01-28 18:34:10.908537557 +0000 UTC m=+1261.735100388" watchObservedRunningTime="2026-01-28 18:34:10.910510743 +0000 UTC m=+1261.737073564" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.934082 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podStartSLOduration=35.513412719 podStartE2EDuration="45.934060727s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" 
firstStartedPulling="2026-01-28 18:33:59.446204809 +0000 UTC m=+1250.272767630" lastFinishedPulling="2026-01-28 18:34:09.866852807 +0000 UTC m=+1260.693415638" observedRunningTime="2026-01-28 18:34:10.933384898 +0000 UTC m=+1261.759947719" watchObservedRunningTime="2026-01-28 18:34:10.934060727 +0000 UTC m=+1261.760623548" Jan 28 18:34:10 crc kubenswrapper[4985]: I0128 18:34:10.952033 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podStartSLOduration=4.252199074 podStartE2EDuration="45.952016374s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.172648126 +0000 UTC m=+1218.999210947" lastFinishedPulling="2026-01-28 18:34:09.872465426 +0000 UTC m=+1260.699028247" observedRunningTime="2026-01-28 18:34:10.950393869 +0000 UTC m=+1261.776956700" watchObservedRunningTime="2026-01-28 18:34:10.952016374 +0000 UTC m=+1261.778579195" Jan 28 18:34:11 crc kubenswrapper[4985]: I0128 18:34:11.005827 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podStartSLOduration=3.69946772 podStartE2EDuration="46.005803653s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:27.569440526 +0000 UTC m=+1218.396003347" lastFinishedPulling="2026-01-28 18:34:09.875776469 +0000 UTC m=+1260.702339280" observedRunningTime="2026-01-28 18:34:10.984144411 +0000 UTC m=+1261.810707232" watchObservedRunningTime="2026-01-28 18:34:11.005803653 +0000 UTC m=+1261.832366474" Jan 28 18:34:11 crc kubenswrapper[4985]: I0128 18:34:11.006991 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podStartSLOduration=4.590297669 podStartE2EDuration="46.006983396s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.456416307 +0000 UTC m=+1219.282979128" lastFinishedPulling="2026-01-28 18:34:09.873102044 +0000 UTC m=+1260.699664855" observedRunningTime="2026-01-28 18:34:10.999738272 +0000 UTC m=+1261.826301093" watchObservedRunningTime="2026-01-28 18:34:11.006983396 +0000 UTC m=+1261.833546217" Jan 28 18:34:12 crc kubenswrapper[4985]: E0128 18:34:12.266837 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" Jan 28 18:34:12 crc kubenswrapper[4985]: I0128 18:34:12.889767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerStarted","Data":"11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500"} Jan 28 18:34:12 crc kubenswrapper[4985]: I0128 18:34:12.906053 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podStartSLOduration=4.177413273 podStartE2EDuration="47.90602914s" podCreationTimestamp="2026-01-28 18:33:25 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.179916111 +0000 UTC m=+1219.006478942" 
lastFinishedPulling="2026-01-28 18:34:11.908531998 +0000 UTC m=+1262.735094809" observedRunningTime="2026-01-28 18:34:12.905577927 +0000 UTC m=+1263.732140758" watchObservedRunningTime="2026-01-28 18:34:12.90602914 +0000 UTC m=+1263.732591971" Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.914959 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerStarted","Data":"e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05"} Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.934038 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.935782 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" podStartSLOduration=3.659507582 podStartE2EDuration="49.935761366s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.689270521 +0000 UTC m=+1219.515833342" lastFinishedPulling="2026-01-28 18:34:14.965524305 +0000 UTC m=+1265.792087126" observedRunningTime="2026-01-28 18:34:15.926913636 +0000 UTC m=+1266.753476467" watchObservedRunningTime="2026-01-28 18:34:15.935761366 +0000 UTC m=+1266.762324207" Jan 28 18:34:15 crc kubenswrapper[4985]: I0128 18:34:15.969185 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.138057 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.374279 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.382612 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 18:34:16 crc kubenswrapper[4985]: I0128 18:34:16.513775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 18:34:17 crc kubenswrapper[4985]: I0128 18:34:17.147626 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 18:34:17 crc kubenswrapper[4985]: I0128 18:34:17.965279 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 18:34:21 crc kubenswrapper[4985]: I0128 18:34:21.969760 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerStarted","Data":"6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b"} Jan 28 18:34:21 crc kubenswrapper[4985]: I0128 18:34:21.970509 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:34:21 crc 
kubenswrapper[4985]: I0128 18:34:21.987768 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podStartSLOduration=2.9858238139999997 podStartE2EDuration="55.987750517s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.701374363 +0000 UTC m=+1219.527937184" lastFinishedPulling="2026-01-28 18:34:21.703301066 +0000 UTC m=+1272.529863887" observedRunningTime="2026-01-28 18:34:21.983180688 +0000 UTC m=+1272.809743519" watchObservedRunningTime="2026-01-28 18:34:21.987750517 +0000 UTC m=+1272.814313348" Jan 28 18:34:22 crc kubenswrapper[4985]: I0128 18:34:22.643418 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 18:34:23 crc kubenswrapper[4985]: I0128 18:34:23.986434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerStarted","Data":"63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d"} Jan 28 18:34:23 crc kubenswrapper[4985]: I0128 18:34:23.987021 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:34:24 crc kubenswrapper[4985]: I0128 18:34:24.013803 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podStartSLOduration=2.983333863 podStartE2EDuration="58.013777886s" podCreationTimestamp="2026-01-28 18:33:26 +0000 UTC" firstStartedPulling="2026-01-28 18:33:28.719101703 +0000 UTC m=+1219.545664534" lastFinishedPulling="2026-01-28 18:34:23.749545726 +0000 UTC m=+1274.576108557" observedRunningTime="2026-01-28 18:34:24.008094346 +0000 UTC m=+1274.834657167" watchObservedRunningTime="2026-01-28 18:34:24.013777886 +0000 UTC m=+1274.840340717" Jan 28 18:34:26 crc kubenswrapper[4985]: I0128 18:34:26.384827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 18:34:27 crc kubenswrapper[4985]: I0128 18:34:27.311090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 18:34:37 crc kubenswrapper[4985]: I0128 18:34:37.373685 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.944684 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.946636 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.952804 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.953076 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.953423 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-sgpwf" Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.953603 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 18:34:54 crc kubenswrapper[4985]: I0128 18:34:54.958506 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.015153 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.018410 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.022438 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.029472 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.092542 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.092759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.194823 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.194968 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.195069 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " 
pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.195116 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.195165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.196018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.225909 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"dnsmasq-dns-675f4bcbfc-z95qg\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.278940 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.297493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.297605 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.297639 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.298898 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.299778 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: 
\"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.321457 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") pod \"dnsmasq-dns-78dd6ddcc-x78r6\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.337965 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.823662 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.839714 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:34:55 crc kubenswrapper[4985]: I0128 18:34:55.939421 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:34:55 crc kubenswrapper[4985]: W0128 18:34:55.940405 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd902791c_2d1f_4c1d_9351_6ef3788b3b77.slice/crio-726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5 WatchSource:0}: Error finding container 726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5: Status 404 returned error can't find the container with id 726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5 Jan 28 18:34:56 crc kubenswrapper[4985]: I0128 18:34:56.328853 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" event={"ID":"d902791c-2d1f-4c1d-9351-6ef3788b3b77","Type":"ContainerStarted","Data":"726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5"} Jan 28 18:34:56 crc kubenswrapper[4985]: I0128 18:34:56.331408 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" event={"ID":"d572008e-db0e-44d1-af83-a8c9a7f2cf48","Type":"ContainerStarted","Data":"63e8d84c0aba56aa3512a4ac1c8f628871da4e22c66d7cefbfe1bef6df1c6884"} Jan 28 18:34:57 crc kubenswrapper[4985]: I0128 18:34:57.984743 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.027360 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.028992 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.040753 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.180725 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.180880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.181123 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.282356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.282707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.282849 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.284082 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.284085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.319394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cthrq\" (UniqueName: 
\"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"dnsmasq-dns-666b6646f7-ndmmr\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.374169 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.451170 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.469004 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.473998 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.512782 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.598055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.598108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.598131 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.705854 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.706456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.706496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.707355 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.707844 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.727839 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"dnsmasq-dns-57d769cc4f-2ltmw\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:58 crc kubenswrapper[4985]: I0128 18:34:58.890101 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:34:59 crc kubenswrapper[4985]: W0128 18:34:59.078922 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1bd09ad3_e6d8_4ee9_b465_139f6de0ae5c.slice/crio-3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b WatchSource:0}: Error finding container 3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b: Status 404 returned error can't find the container with id 3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.087077 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.183866 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.186152 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191184 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191481 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191201 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191549 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-8vf7j" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.191862 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.192077 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.192212 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.199166 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.202291 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.211615 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.222699 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.225187 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.233769 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.248497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368215 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368274 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368363 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368386 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368402 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368416 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368433 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368543 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368604 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368734 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.368925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.369024 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.369243 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370028 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370098 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370177 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370237 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370307 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1" Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370322 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370390 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370450 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.370471 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.372312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.372394 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.426691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" event={"ID":"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c","Type":"ContainerStarted","Data":"3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b"}
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.472583 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"]
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473851 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473872 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473888 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.473988 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474055 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474124 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474147 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474162 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474225 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474289 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474310 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474326 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474379 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474393 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474453 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474475 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474479 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474607 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474643 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474774 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474828 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.474878 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.476400 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.476791 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.476857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.478625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.479230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.479534 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480071 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480110 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480107 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce250563889cf210f76b1961aa7444b8cbe0d3f306db896236b924f9bdc2ed03/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.480613 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.481613 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.482271 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.483993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.484775 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485124 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485309 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.485793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.488728 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.490629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.491746 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.492509 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.491168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.497015 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.504981 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.505031 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c775c7dad0eb68939020e6ac39de7a8b8505e50517c4739aca512474a1c5503/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.505113 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.505205 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18da3f6437b5d54d0b067e2370e468c4fc3f3bb8be36828902e2b198f7e21ef1/globalmount\"" pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512234 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.512972 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: W0128 18:34:59.521087 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podee74e7b2_a80e_4390_afec_a13db1b25da2.slice/crio-31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376 WatchSource:0}: Error finding container 31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376: Status 404 returned error can't find the container with id 31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.522959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.527461 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.577115 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.590430 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.612485 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.613164 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.619916 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.621731 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.624829 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.624976 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.625112 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-zs2dp"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.626334 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.627538 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.627770 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.635521 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684662 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684690 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684714 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684748 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.684972 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685031 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685131 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.685167 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789539 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789665 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789752 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.789945 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.790424 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.790945 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.796550 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.796598 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac8bde78162f1032f95f647174ef8183aa4e0f86240347c6b6b8d4a86e7076a1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.798128 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.798402 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.800540 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.800928 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.801096 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.812606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.828979 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.832673 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.845947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.859279 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.873834 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2"
Jan 28 18:34:59 crc kubenswrapper[4985]: I0128 18:34:59.976815 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.396945 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 18:35:00 crc kubenswrapper[4985]: W0128 18:35:00.398217 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a4c48be_3f2f_4c2d_a0ba_2084caf7c541.slice/crio-210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23 WatchSource:0}: Error finding container 210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23: Status 404 returned error can't find the container with id 210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.440188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" event={"ID":"ee74e7b2-a80e-4390-afec-a13db1b25da2","Type":"ContainerStarted","Data":"31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376"}
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.442911 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerStarted","Data":"210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23"}
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.599910 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"]
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.599964 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.722100 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"]
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.726179 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.733910 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"]
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.745846 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-2mt89"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.745851 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.746057 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.750389 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.753436 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.814036 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.823621 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kolla-config\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.823933 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-default\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824065 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824106 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824126 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824269 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.824294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jzqz\" (UniqueName: \"kubernetes.io/projected/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kube-api-access-9jzqz\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926219 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926391 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9jzqz\" (UniqueName: \"kubernetes.io/projected/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kube-api-access-9jzqz\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926408 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kolla-config\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926447 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-default\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.926768 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.928514 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kolla-config\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.929788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-generated\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.931520 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-config-data-default\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.931805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-operator-scripts\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.937759 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.940809 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.940855 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/241736b2c687c815404498b1a703eac59b60363755cc372daf663a1193acdcd8/globalmount\"" pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.950788 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9jzqz\" (UniqueName: \"kubernetes.io/projected/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-kube-api-access-9jzqz\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.964309 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:00 crc kubenswrapper[4985]: I0128 18:35:00.991243 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08864e67-424b-4807-88e5-3a7a74922802\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08864e67-424b-4807-88e5-3a7a74922802\") pod \"openstack-galera-0\" (UID: \"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8\") " pod="openstack/openstack-galera-0"
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.076483 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0"
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.463988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerStarted","Data":"f0ff3c53025b9ae422df2e7cccc0ec25b7dd495fd74546696ee043e91187bb41"}
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.469920 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerStarted","Data":"17211bf5e9b8b8c383ea958cf8ed251d1d40c28a9c6c3e8e814a8d59072a3363"}
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.475623 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerStarted","Data":"3743df7761e9f95626d5189d3a604fc7ae4f9d57706f392ce36c256fb508d124"}
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.947879 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.951904 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.955651 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data"
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.957561 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-z2wcg"
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.957600 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc"
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.959222 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts"
Jan 28 18:35:01 crc kubenswrapper[4985]: I0128 18:35:01.962984 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"]
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lkr7\" (UniqueName: \"kubernetes.io/projected/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kube-api-access-5lkr7\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057522 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057575 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057603 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057632 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057678 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.057756 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.163994 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164103 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164197 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164316 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164396 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164434 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.164532 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5lkr7\" (UniqueName: \"kubernetes.io/projected/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kube-api-access-5lkr7\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.165797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.165907 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.166098 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.168766 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0"
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.172904 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.172964 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/7a286e86a0ff5e9358de4d53c455c6c79dae9dce989e12f65d2f3cc31213a936/globalmount\"" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.189413 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.192354 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.212978 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lkr7\" (UniqueName: \"kubernetes.io/projected/b8253e52-6b52-45a9-b5d6-680d3dfbebe7-kube-api-access-5lkr7\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.233697 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.235049 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.244123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.247778 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.247801 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.248019 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-5tbcp" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkfwp\" (UniqueName: \"kubernetes.io/projected/88fe31db-8414-43ac-b547-fa0278d9508f-kube-api-access-wkfwp\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267493 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-config-data\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.267619 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-kolla-config\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.321815 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-24245c8a-20b3-4600-8192-4628487d4a9e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-24245c8a-20b3-4600-8192-4628487d4a9e\") pod \"openstack-cell1-galera-0\" (UID: \"b8253e52-6b52-45a9-b5d6-680d3dfbebe7\") " pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370545 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-config-data\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370756 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-kolla-config\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.370846 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkfwp\" (UniqueName: \"kubernetes.io/projected/88fe31db-8414-43ac-b547-fa0278d9508f-kube-api-access-wkfwp\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.371429 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-config-data\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.372017 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/88fe31db-8414-43ac-b547-fa0278d9508f-kolla-config\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.380067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-combined-ca-bundle\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.381890 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/88fe31db-8414-43ac-b547-fa0278d9508f-memcached-tls-certs\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.398376 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkfwp\" (UniqueName: \"kubernetes.io/projected/88fe31db-8414-43ac-b547-fa0278d9508f-kube-api-access-wkfwp\") pod \"memcached-0\" (UID: \"88fe31db-8414-43ac-b547-fa0278d9508f\") " pod="openstack/memcached-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.582683 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:02 crc kubenswrapper[4985]: I0128 18:35:02.639374 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 18:35:03 crc kubenswrapper[4985]: I0128 18:35:03.999374 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.086411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: W0128 18:35:04.327861 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb8253e52_6b52_45a9_b5d6_680d3dfbebe7.slice/crio-fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466 WatchSource:0}: Error finding container fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466: Status 404 returned error can't find the container with id fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466 Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.329181 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.520695 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"b64358d999fa9ab8443bf574a2dc6823b1bf3a2469dbeb9c4025c7e9703bfeed"} Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.522883 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"88fe31db-8414-43ac-b547-fa0278d9508f","Type":"ContainerStarted","Data":"9edcc6df9d4b2dc184587b9332b5a60759478281c8d2ebea39c78338aaa4ce36"} Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.524127 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"fed6a9175dbcd89ccf358589cad8420ffa9ad9b8667625a1ebb22a73b6a06466"} Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.765177 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.766425 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.781006 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.781349 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-h7kgr" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.824232 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"kube-state-metrics-0\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " pod="openstack/kube-state-metrics-0" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.925947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"kube-state-metrics-0\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " pod="openstack/kube-state-metrics-0" Jan 28 18:35:04 crc kubenswrapper[4985]: I0128 18:35:04.963465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"kube-state-metrics-0\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " pod="openstack/kube-state-metrics-0" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.126376 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.611957 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn"] Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.615622 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.622641 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards-sa-dockercfg-7x8tl" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.623560 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-ui-dashboards" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.631371 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn"] Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.670727 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmrr\" (UniqueName: \"kubernetes.io/projected/c9b84394-02f1-4bde-befe-a2a649925c93-kube-api-access-lcmrr\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.670831 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.760024 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.779566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmrr\" (UniqueName: \"kubernetes.io/projected/c9b84394-02f1-4bde-befe-a2a649925c93-kube-api-access-lcmrr\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.779722 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: E0128 18:35:05.779862 4985 secret.go:188] Couldn't get secret openshift-operators/observability-ui-dashboards: secret "observability-ui-dashboards" not found Jan 28 18:35:05 crc kubenswrapper[4985]: E0128 18:35:05.779913 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert podName:c9b84394-02f1-4bde-befe-a2a649925c93 nodeName:}" failed. No retries permitted until 2026-01-28 18:35:06.27989722 +0000 UTC m=+1317.106460041 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert") pod "observability-ui-dashboards-66cbf594b5-5w5dn" (UID: "c9b84394-02f1-4bde-befe-a2a649925c93") : secret "observability-ui-dashboards" not found Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.803474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmrr\" (UniqueName: \"kubernetes.io/projected/c9b84394-02f1-4bde-befe-a2a649925c93-kube-api-access-lcmrr\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:05 crc kubenswrapper[4985]: I0128 18:35:05.990726 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-74779d9b4-2xxwx"] Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.000386 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.030479 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74779d9b4-2xxwx"] Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086570 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4gpl\" (UniqueName: \"kubernetes.io/projected/6b348b0a-4b9a-4216-adbf-02bcefe1f011-kube-api-access-t4gpl\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086644 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-service-ca\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086731 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-oauth-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086784 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-oauth-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086844 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-trusted-ca-bundle\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.086882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.188888 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-oauth-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-trusted-ca-bundle\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189059 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4gpl\" (UniqueName: \"kubernetes.io/projected/6b348b0a-4b9a-4216-adbf-02bcefe1f011-kube-api-access-t4gpl\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189104 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-service-ca\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189153 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.189182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-oauth-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.191373 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-service-ca\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.191828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-oauth-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.191853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-trusted-ca-bundle\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.193814 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.196338 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-serving-cert\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.206651 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4gpl\" (UniqueName: \"kubernetes.io/projected/6b348b0a-4b9a-4216-adbf-02bcefe1f011-kube-api-access-t4gpl\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.215473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6b348b0a-4b9a-4216-adbf-02bcefe1f011-console-oauth-config\") pod \"console-74779d9b4-2xxwx\" (UID: \"6b348b0a-4b9a-4216-adbf-02bcefe1f011\") " pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.290704 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.294229 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9b84394-02f1-4bde-befe-a2a649925c93-serving-cert\") pod \"observability-ui-dashboards-66cbf594b5-5w5dn\" (UID: \"c9b84394-02f1-4bde-befe-a2a649925c93\") " pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.325513 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.538501 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.568785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerStarted","Data":"ec024b4a882b8b962648e5e1cddea01209414bd2598d2c9c73886bd738d4ea3d"} Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.961636 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.967967 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.971923 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-wj229" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.972178 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.972589 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.972188 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.973365 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.973886 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.973949 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Jan 28 18:35:06 crc kubenswrapper[4985]: I0128 18:35:06.988692 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.012786 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115273 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115691 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115745 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115815 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115886 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115920 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.115950 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218525 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: 
\"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218640 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218670 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218713 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.218741 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.219207 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.219237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220134 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220340 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220390 4985 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.220609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.222829 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.222905 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.223188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.224532 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.224569 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/48fd35393a2bd67e182a1b8f0b6bc712b43ce2f1ef21a21dd138faec48abf12b/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.224623 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.226403 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.226876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.237209 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.239216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.276931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.319172 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.840823 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9r84t"] Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.842769 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9r84t" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.845978 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.846297 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.846687 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-6gpkf" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.853341 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-f287q"] Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.855991 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.872066 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t"] Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.893631 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f287q"] Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940189 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw4sw\" (UniqueName: \"kubernetes.io/projected/2d1c1ab5-7e43-47cd-8218-3d945574a79c-kube-api-access-tw4sw\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-log\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940698 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d1c1ab5-7e43-47cd-8218-3d945574a79c-scripts\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m94j6\" (UniqueName: \"kubernetes.io/projected/2c181f14-26b7-49f4-9ae0-869d9b291938-kube-api-access-m94j6\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " 
pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940883 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-lib\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.940985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-run\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941063 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-etc-ovs\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941204 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c181f14-26b7-49f4-9ae0-869d9b291938-scripts\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941269 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-ovn-controller-tls-certs\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941334 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-log-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:07 crc kubenswrapper[4985]: I0128 18:35:07.941356 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-combined-ca-bundle\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043717 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tw4sw\" (UniqueName: \"kubernetes.io/projected/2d1c1ab5-7e43-47cd-8218-3d945574a79c-kube-api-access-tw4sw\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043815 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc 
kubenswrapper[4985]: I0128 18:35:08.043873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043914 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-log\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043951 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d1c1ab5-7e43-47cd-8218-3d945574a79c-scripts\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043972 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m94j6\" (UniqueName: \"kubernetes.io/projected/2c181f14-26b7-49f4-9ae0-869d9b291938-kube-api-access-m94j6\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.043993 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-lib\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-run\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044046 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-etc-ovs\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044093 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c181f14-26b7-49f4-9ae0-869d9b291938-scripts\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044110 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-ovn-controller-tls-certs\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044127 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: 
\"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-log-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044143 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-combined-ca-bundle\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044808 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-lib\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.044999 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-log\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045078 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045142 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-log-ovn\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045349 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-var-run\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045366 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2d1c1ab5-7e43-47cd-8218-3d945574a79c-var-run\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.045433 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/2c181f14-26b7-49f4-9ae0-869d9b291938-etc-ovs\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.047596 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2d1c1ab5-7e43-47cd-8218-3d945574a79c-scripts\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.054502 4985 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2c181f14-26b7-49f4-9ae0-869d9b291938-scripts\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.063001 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-ovn-controller-tls-certs\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.065552 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1c1ab5-7e43-47cd-8218-3d945574a79c-combined-ca-bundle\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.066001 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tw4sw\" (UniqueName: \"kubernetes.io/projected/2d1c1ab5-7e43-47cd-8218-3d945574a79c-kube-api-access-tw4sw\") pod \"ovn-controller-9r84t\" (UID: \"2d1c1ab5-7e43-47cd-8218-3d945574a79c\") " pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.066919 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m94j6\" (UniqueName: \"kubernetes.io/projected/2c181f14-26b7-49f4-9ae0-869d9b291938-kube-api-access-m94j6\") pod \"ovn-controller-ovs-f287q\" (UID: \"2c181f14-26b7-49f4-9ae0-869d9b291938\") " pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.176229 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9r84t" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.195181 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.725681 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.728271 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731437 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731506 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731453 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731712 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.731814 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-zsvtp" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.751739 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865488 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865536 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865670 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865789 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865887 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.865970 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvt6c\" (UniqueName: \"kubernetes.io/projected/6e1c7625-25e1-442f-9f71-5d2a9323306c-kube-api-access-jvt6c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.866001 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.968423 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvt6c\" (UniqueName: \"kubernetes.io/projected/6e1c7625-25e1-442f-9f71-5d2a9323306c-kube-api-access-jvt6c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972189 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972474 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972541 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.972632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod 
\"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.974406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.974974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-config\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.974993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.975743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/6e1c7625-25e1-442f-9f71-5d2a9323306c-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.982776 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.983004 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e1c7625-25e1-442f-9f71-5d2a9323306c-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.991591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvt6c\" (UniqueName: \"kubernetes.io/projected/6e1c7625-25e1-442f-9f71-5d2a9323306c-kube-api-access-jvt6c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.992975 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:35:08 crc kubenswrapper[4985]: I0128 18:35:08.993043 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1577e25a4d037b9f1fe65c5cf6da4068d3343b1c98128ca48e5b0ea8ceecf297/globalmount\"" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:09 crc kubenswrapper[4985]: I0128 18:35:09.041785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-2f5e5d2c-fc16-4cda-b953-ed16f5f0233c\") pod \"ovsdbserver-sb-0\" (UID: \"6e1c7625-25e1-442f-9f71-5d2a9323306c\") " pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:09 crc kubenswrapper[4985]: I0128 18:35:09.066453 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.185936 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.186546 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.480868 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.483387 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.486838 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.486953 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.487203 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-nvkdc" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.487487 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.493292 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630323 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630377 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630402 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630533 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-config\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630760 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drgds\" (UniqueName: \"kubernetes.io/projected/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-kube-api-access-drgds\") pod 
\"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.630787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732163 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732225 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732370 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-config\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732388 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drgds\" (UniqueName: \"kubernetes.io/projected/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-kube-api-access-drgds\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732409 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732486 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.732511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.733672 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.734160 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.734867 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-config\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739587 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739720 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.739762 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/54c46588aa336c2bb13d151debfea516f5088415e77b1327372dc864ad111bd2/globalmount\"" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.740225 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.751805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drgds\" (UniqueName: \"kubernetes.io/projected/76ff3fb3-d9e1-41dc-a644-8ac29cb97d11-kube-api-access-drgds\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.770937 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-89b6c9cf-94f1-4689-8631-65bf241dc568\") pod \"ovsdbserver-nb-0\" (UID: \"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11\") " pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:11 crc kubenswrapper[4985]: I0128 18:35:11.818784 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:19 crc kubenswrapper[4985]: I0128 18:35:19.762995 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-74779d9b4-2xxwx"] Jan 28 18:35:22 crc kubenswrapper[4985]: E0128 18:35:22.666720 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:35:22 crc kubenswrapper[4985]: E0128 18:35:22.667227 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7zspj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-z95qg_openstack(d572008e-db0e-44d1-af83-a8c9a7f2cf48): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:35:22 crc kubenswrapper[4985]: E0128 18:35:22.668935 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" podUID="d572008e-db0e-44d1-af83-a8c9a7f2cf48" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.449541 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.450119 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwbpd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-x78r6_openstack(d902791c-2d1f-4c1d-9351-6ef3788b3b77): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.451405 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" podUID="d902791c-2d1f-4c1d-9351-6ef3788b3b77" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.491583 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.491749 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cthrq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-ndmmr_openstack(1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.493151 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" podUID="1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.515220 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.515431 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qwhbp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2ltmw_openstack(ee74e7b2-a80e-4390-afec-a13db1b25da2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.516623 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" podUID="ee74e7b2-a80e-4390-afec-a13db1b25da2" Jan 28 18:35:23 crc kubenswrapper[4985]: W0128 18:35:23.561300 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b348b0a_4b9a_4216_adbf_02bcefe1f011.slice/crio-866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d WatchSource:0}: Error finding container 866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d: Status 404 returned error can't find the container with id 866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.703541 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.729376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" event={"ID":"d572008e-db0e-44d1-af83-a8c9a7f2cf48","Type":"ContainerDied","Data":"63e8d84c0aba56aa3512a4ac1c8f628871da4e22c66d7cefbfe1bef6df1c6884"} Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.729529 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-z95qg" Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.734819 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74779d9b4-2xxwx" event={"ID":"6b348b0a-4b9a-4216-adbf-02bcefe1f011","Type":"ContainerStarted","Data":"866f63d57e390eecef2b103a7c3da56e9b87c70bdffada6f5f86f4e18918897d"} Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.736445 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" podUID="ee74e7b2-a80e-4390-afec-a13db1b25da2" Jan 28 18:35:23 crc kubenswrapper[4985]: E0128 18:35:23.736828 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" podUID="1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.787607 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") pod \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.787788 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") pod \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\" (UID: \"d572008e-db0e-44d1-af83-a8c9a7f2cf48\") " Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.791412 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config" (OuterVolumeSpecName: "config") pod "d572008e-db0e-44d1-af83-a8c9a7f2cf48" (UID: "d572008e-db0e-44d1-af83-a8c9a7f2cf48"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.801581 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj" (OuterVolumeSpecName: "kube-api-access-7zspj") pod "d572008e-db0e-44d1-af83-a8c9a7f2cf48" (UID: "d572008e-db0e-44d1-af83-a8c9a7f2cf48"). InnerVolumeSpecName "kube-api-access-7zspj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.890380 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d572008e-db0e-44d1-af83-a8c9a7f2cf48-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:23 crc kubenswrapper[4985]: I0128 18:35:23.890411 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zspj\" (UniqueName: \"kubernetes.io/projected/d572008e-db0e-44d1-af83-a8c9a7f2cf48-kube-api-access-7zspj\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.100865 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.106204 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-z95qg"] Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.290714 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.307498 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn"] Jan 28 18:35:24 crc kubenswrapper[4985]: W0128 18:35:24.343067 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc9b84394_02f1_4bde_befe_a2a649925c93.slice/crio-97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e WatchSource:0}: Error finding container 97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e: Status 404 returned error can't find the container with id 97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.352055 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.505323 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") pod \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.505719 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") pod \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.505862 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") pod \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\" (UID: \"d902791c-2d1f-4c1d-9351-6ef3788b3b77\") " Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.506399 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config" (OuterVolumeSpecName: "config") pod "d902791c-2d1f-4c1d-9351-6ef3788b3b77" (UID: "d902791c-2d1f-4c1d-9351-6ef3788b3b77"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.506764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "d902791c-2d1f-4c1d-9351-6ef3788b3b77" (UID: "d902791c-2d1f-4c1d-9351-6ef3788b3b77"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.508948 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t"] Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.510420 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd" (OuterVolumeSpecName: "kube-api-access-zwbpd") pod "d902791c-2d1f-4c1d-9351-6ef3788b3b77" (UID: "d902791c-2d1f-4c1d-9351-6ef3788b3b77"). InnerVolumeSpecName "kube-api-access-zwbpd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.609108 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.609141 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwbpd\" (UniqueName: \"kubernetes.io/projected/d902791c-2d1f-4c1d-9351-6ef3788b3b77-kube-api-access-zwbpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.609152 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d902791c-2d1f-4c1d-9351-6ef3788b3b77-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.753668 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" event={"ID":"d902791c-2d1f-4c1d-9351-6ef3788b3b77","Type":"ContainerDied","Data":"726d39ad443f4cf7528eaa7e16886673ba8250d6c2d954f18e44637adfce94f5"} Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.754069 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x78r6" Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.777770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t" event={"ID":"2d1c1ab5-7e43-47cd-8218-3d945574a79c","Type":"ContainerStarted","Data":"ebce52a94b4fb29c30b89c997e292645481163c57e0edf829e59a0a3b4cc6094"} Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.782802 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.787065 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" event={"ID":"c9b84394-02f1-4bde-befe-a2a649925c93","Type":"ContainerStarted","Data":"97d7c04ac820f964fa6642f81afb510cfa4d81e3a4c59a4261b946d8482d0f3e"} Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.790243 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e1c7625-25e1-442f-9f71-5d2a9323306c","Type":"ContainerStarted","Data":"076cb278f179a7d28ea480b3e3ec46d4a5cc5412e18855f107c2554883d7d67c"} Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.854126 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.881120 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x78r6"] Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.896962 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 18:35:24 crc kubenswrapper[4985]: W0128 18:35:24.990980 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96162e6f_966d_438d_9362_ef03abc4b277.slice/crio-e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22 WatchSource:0}: Error finding container e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22: Status 404 returned error can't find the container with id e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22 Jan 28 18:35:24 crc kubenswrapper[4985]: I0128 18:35:24.991933 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-f287q"] Jan 28 18:35:24 crc kubenswrapper[4985]: W0128 18:35:24.997327 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2c181f14_26b7_49f4_9ae0_869d9b291938.slice/crio-ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9 WatchSource:0}: Error finding container ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9: Status 404 returned error can't find the container with id ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9 Jan 28 18:35:25 crc kubenswrapper[4985]: W0128 18:35:25.013366 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76ff3fb3_d9e1_41dc_a644_8ac29cb97d11.slice/crio-8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406 WatchSource:0}: Error finding container 8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406: Status 404 returned error can't find the container with id 8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406 Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.277398 4985 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="d572008e-db0e-44d1-af83-a8c9a7f2cf48" path="/var/lib/kubelet/pods/d572008e-db0e-44d1-af83-a8c9a7f2cf48/volumes" Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.277787 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d902791c-2d1f-4c1d-9351-6ef3788b3b77" path="/var/lib/kubelet/pods/d902791c-2d1f-4c1d-9351-6ef3788b3b77/volumes" Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.800328 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-74779d9b4-2xxwx" event={"ID":"6b348b0a-4b9a-4216-adbf-02bcefe1f011","Type":"ContainerStarted","Data":"64451822b6a5d78bf7c6fef9ea73354b476e0858e3dd3396503a08a9645b7247"} Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.803028 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerStarted","Data":"ca0bb8b5399b511a513e3b1f1d114eeeb939d9fe220f62c4ae70ed6aff99afb9"} Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.804285 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11","Type":"ContainerStarted","Data":"8abe09c7604dfb391e40de5a4e3d7ff05d0fc7455a2e80d39a82d081f4c22406"} Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.806197 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"3dc2fb534ca52f8faf7f4cde3f2dda84c2df48066734fe6ac9c5b40591a7af86"} Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.808002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22"} Jan 28 18:35:25 crc kubenswrapper[4985]: I0128 18:35:25.825612 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-74779d9b4-2xxwx" podStartSLOduration=20.825594407 podStartE2EDuration="20.825594407s" podCreationTimestamp="2026-01-28 18:35:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:25.817435476 +0000 UTC m=+1336.643998297" watchObservedRunningTime="2026-01-28 18:35:25.825594407 +0000 UTC m=+1336.652157228" Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.326025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.326178 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.333169 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.819708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerStarted","Data":"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"} Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.827614 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74779d9b4-2xxwx" 
Jan 28 18:35:26 crc kubenswrapper[4985]: I0128 18:35:26.963723 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"]
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.832497 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"48b9afd0e8ea6f4d4858d6f84a49b2f7c97a3a8f124cd52fc3574f7899a262df"}
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.837145 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerStarted","Data":"4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7"}
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.842424 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"88fe31db-8414-43ac-b547-fa0278d9508f","Type":"ContainerStarted","Data":"b2ceb9916f921708e12af47eab44ac983832d4dd7d69425eda27d0fb98bed8c0"}
Jan 28 18:35:27 crc kubenswrapper[4985]: I0128 18:35:27.888747 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=5.657164659 podStartE2EDuration="25.888721833s" podCreationTimestamp="2026-01-28 18:35:02 +0000 UTC" firstStartedPulling="2026-01-28 18:35:04.100441045 +0000 UTC m=+1314.927003866" lastFinishedPulling="2026-01-28 18:35:24.331998219 +0000 UTC m=+1335.158561040" observedRunningTime="2026-01-28 18:35:27.884141174 +0000 UTC m=+1338.710704005" watchObservedRunningTime="2026-01-28 18:35:27.888721833 +0000 UTC m=+1338.715284664"
Jan 28 18:35:28 crc kubenswrapper[4985]: I0128 18:35:28.857981 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerStarted","Data":"dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a"}
Jan 28 18:35:28 crc kubenswrapper[4985]: I0128 18:35:28.866482 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerStarted","Data":"bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8"}
Jan 28 18:35:28 crc kubenswrapper[4985]: I0128 18:35:28.866947 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Jan 28 18:35:29 crc kubenswrapper[4985]: I0128 18:35:29.878313 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerStarted","Data":"926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29"}
Jan 28 18:35:29 crc kubenswrapper[4985]: I0128 18:35:29.880285 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.222049 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=4.539399587 podStartE2EDuration="27.22202594s" podCreationTimestamp="2026-01-28 18:35:04 +0000 UTC" firstStartedPulling="2026-01-28 18:35:05.773817968 +0000 UTC m=+1316.600380789" lastFinishedPulling="2026-01-28 18:35:28.456444331 +0000 UTC m=+1339.283007142" observedRunningTime="2026-01-28 18:35:29.914686932 +0000 UTC m=+1340.741249753" watchObservedRunningTime="2026-01-28 18:35:31.22202594 +0000 UTC m=+1342.048588761"
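The arithmetic in the two "Observed pod startup duration" entries above is self-consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling). For memcached-0 that gives 25.888721833s end-to-end, 20.231557174s of pulling, and 5.657164659s SLO duration. A worked check in Go (a sketch; the timestamp layout is the one Go's time.Time String method emits, which is what these fields contain):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2026-01-28 18:35:02 +0000 UTC")
    	firstPull := parse("2026-01-28 18:35:04.100441045 +0000 UTC")
    	lastPull := parse("2026-01-28 18:35:24.331998219 +0000 UTC")
    	running := parse("2026-01-28 18:35:27.888721833 +0000 UTC")

    	e2e := running.Sub(created)     // 25.888721833s = podStartE2EDuration
    	pull := lastPull.Sub(firstPull) // 20.231557174s spent pulling images
    	fmt.Println(e2e, e2e-pull)      // prints: 25.888721833s 5.657164659s
    }

For console-74779d9b4-2xxwx further up, both pull timestamps are the zero time (0001-01-01), i.e. no pull happened, so the SLO duration equals the E2E duration at about 20.83s.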
m=+1342.048588761" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.230235 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-vsdt5"] Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.231924 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.242602 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304330 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovs-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304993 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-combined-ca-bundle\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.305067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lll8z\" (UniqueName: \"kubernetes.io/projected/d67712df-b1fe-463f-9a6c-c0591aa6cec2-kube-api-access-lll8z\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.305096 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67712df-b1fe-463f-9a6c-c0591aa6cec2-config\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.305144 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovn-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.304375 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vsdt5"] Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915466 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " 
pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915556 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovs-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-combined-ca-bundle\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915689 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lll8z\" (UniqueName: \"kubernetes.io/projected/d67712df-b1fe-463f-9a6c-c0591aa6cec2-kube-api-access-lll8z\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915717 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67712df-b1fe-463f-9a6c-c0591aa6cec2-config\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.915746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovn-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.916042 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovn-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.919755 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/d67712df-b1fe-463f-9a6c-c0591aa6cec2-ovs-rundir\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.920129 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d67712df-b1fe-463f-9a6c-c0591aa6cec2-config\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.928620 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-combined-ca-bundle\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.947029 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d67712df-b1fe-463f-9a6c-c0591aa6cec2-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:31 crc kubenswrapper[4985]: I0128 18:35:31.956190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lll8z\" (UniqueName: \"kubernetes.io/projected/d67712df-b1fe-463f-9a6c-c0591aa6cec2-kube-api-access-lll8z\") pod \"ovn-controller-metrics-vsdt5\" (UID: \"d67712df-b1fe-463f-9a6c-c0591aa6cec2\") " pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.061522 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.097744 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.139131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.139306 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.142059 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.176280 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-vsdt5" Jan 28 18:35:32 crc kubenswrapper[4985]: I0128 18:35:32.300679 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331025 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331096 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331190 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.331230 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc 
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.361552 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.369368 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.391167 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"]
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440150 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440297 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440375 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440664 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440726 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440810 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.440989 4985 reconciler_common.go:245]
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.441045 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.441331 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.441456 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.442097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.475321 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"dnsmasq-dns-6bc7876d45-kf7j5\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.543667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.543738 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.543785 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.544285 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.544529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545033 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545342 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.545987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.565051 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dnsmasq-dns-8554648995-sbd6h\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.651643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.717678 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:32.769867 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.157481 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.184649 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.213549 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.215909 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.237097 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.309924 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310095 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310159 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310211 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.310283 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413682 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.413764 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.415106 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.415524 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.416133 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.416340 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.444461 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"dnsmasq-dns-b8fbc5445-f4mq4\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:35.536550 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.134297 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.146508 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") pod \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.146615 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") pod \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.146758 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") pod \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\" (UID: \"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.147961 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config" (OuterVolumeSpecName: "config") pod "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" (UID: "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.149521 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" (UID: "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.179610 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq" (OuterVolumeSpecName: "kube-api-access-cthrq") pod "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" (UID: "1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c"). InnerVolumeSpecName "kube-api-access-cthrq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.252800 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cthrq\" (UniqueName: \"kubernetes.io/projected/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-kube-api-access-cthrq\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.252848 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.252862 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.284037 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.291048 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.291191 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295578 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295629 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295582 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.295902 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-szwvs" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.359691 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-922sb\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-kube-api-access-922sb\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360153 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-cache\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360543 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-lock\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc 
kubenswrapper[4985]: I0128 18:35:37.360586 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.360638 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462326 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-cache\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462436 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-lock\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462456 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462488 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-922sb\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-kube-api-access-922sb\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.462638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.462685 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.462708 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.462764 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:37.962744361 +0000 UTC m=+1348.789307182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found
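The swift-storage-0 pod is scheduled, but its projected etc-swift volume sources the swift-ring-files ConfigMap, which does not exist yet (presumably it is published by the swift-ring-rebalance job that is ADDed at 18:35:40 further down). The kubelet therefore parks the mount operation and retries with a doubling delay: 500ms here, then 1s and 2s in the repeats below, until the ConfigMap appears. A sketch of that observable schedule (illustrative only; the real backoff lives in nestedpendingoperations and is capped):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 500 * time.Millisecond
    	for attempt := 1; attempt <= 3; attempt++ {
    		fmt.Printf("attempt %d failed; no retries permitted for %v\n", attempt, delay)
    		delay *= 2 // 500ms -> 1s -> 2s, doubling until SetUp succeeds
    	}
    }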
podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:37.962744361 +0000 UTC m=+1348.789307182 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.463144 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-lock\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.463622 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-cache\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.469778 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.469821 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/f34bf9770dd49758400121ece696bba237212777a54e7b942c1c852077ee2a45/globalmount\"" pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.503233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-6d5c6d43-4d98-4842-ac9d-f3b12098d1f0\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.511428 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-922sb\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-kube-api-access-922sb\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.511437 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.794844 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.868890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") pod \"ee74e7b2-a80e-4390-afec-a13db1b25da2\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.869461 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") pod \"ee74e7b2-a80e-4390-afec-a13db1b25da2\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.869538 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config" (OuterVolumeSpecName: "config") pod "ee74e7b2-a80e-4390-afec-a13db1b25da2" (UID: "ee74e7b2-a80e-4390-afec-a13db1b25da2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.869601 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") pod \"ee74e7b2-a80e-4390-afec-a13db1b25da2\" (UID: \"ee74e7b2-a80e-4390-afec-a13db1b25da2\") " Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.870221 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.870543 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ee74e7b2-a80e-4390-afec-a13db1b25da2" (UID: "ee74e7b2-a80e-4390-afec-a13db1b25da2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.873561 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp" (OuterVolumeSpecName: "kube-api-access-qwhbp") pod "ee74e7b2-a80e-4390-afec-a13db1b25da2" (UID: "ee74e7b2-a80e-4390-afec-a13db1b25da2"). InnerVolumeSpecName "kube-api-access-qwhbp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.972796 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.972992 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.973014 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: E0128 18:35:37.973071 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:38.973053557 +0000 UTC m=+1349.799616378 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.973092 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qwhbp\" (UniqueName: \"kubernetes.io/projected/ee74e7b2-a80e-4390-afec-a13db1b25da2-kube-api-access-qwhbp\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:37 crc kubenswrapper[4985]: I0128 18:35:37.973108 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ee74e7b2-a80e-4390-afec-a13db1b25da2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.015704 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" event={"ID":"1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c","Type":"ContainerDied","Data":"3c5466552d205ed11bf957206c330067f0b5fafb2460f8946f1184b0e9c10d6b"} Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.015747 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-ndmmr" Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.017098 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" event={"ID":"ee74e7b2-a80e-4390-afec-a13db1b25da2","Type":"ContainerDied","Data":"31619f9163f0c27ee787dc3b6d91d67625b016d70dc4088ba8f6f0161f7d8376"} Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.017126 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2ltmw" Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.118310 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.132757 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2ltmw"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.159631 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.186035 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-ndmmr"] Jan 28 18:35:38 crc kubenswrapper[4985]: I0128 18:35:38.993948 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:38 crc kubenswrapper[4985]: E0128 18:35:38.994957 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:38 crc kubenswrapper[4985]: E0128 18:35:38.994983 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:38 crc kubenswrapper[4985]: E0128 18:35:38.995039 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:40.99502172 +0000 UTC m=+1351.821584541 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.278434 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c" path="/var/lib/kubelet/pods/1bd09ad3-e6d8-4ee9-b465-139f6de0ae5c/volumes" Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.279487 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ee74e7b2-a80e-4390-afec-a13db1b25da2" path="/var/lib/kubelet/pods/ee74e7b2-a80e-4390-afec-a13db1b25da2/volumes" Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.473392 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:35:39 crc kubenswrapper[4985]: W0128 18:35:39.481380 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfa80be1e_734c_44bc_a957_137332ecd58a.slice/crio-d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87 WatchSource:0}: Error finding container d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87: Status 404 returned error can't find the container with id d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87 Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.484091 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.494480 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-vsdt5"] Jan 28 18:35:39 crc kubenswrapper[4985]: I0128 18:35:39.726715 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:39 crc kubenswrapper[4985]: W0128 18:35:39.726906 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddadb283d_7f9f_414c_9017_f8c0875878ad.slice/crio-8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762 WatchSource:0}: Error finding container 8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762: Status 404 returned error can't find the container with id 8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762 Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.059023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" event={"ID":"c9b84394-02f1-4bde-befe-a2a649925c93","Type":"ContainerStarted","Data":"10ed3a239138cda36178fa97f77027b6bb27361007e7a5dfba71518cc70cc9e7"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.060834 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vsdt5" event={"ID":"d67712df-b1fe-463f-9a6c-c0591aa6cec2","Type":"ContainerStarted","Data":"ce62da9ab4ad5ebe9ac484655e095e764a13892f2927ef24b033182c66dbaa4e"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.062980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11","Type":"ContainerStarted","Data":"530a57d4fcc58a7444990734dca2f387a5beaeeefa1e7184ab5c1cd39f839253"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.064336 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerStarted","Data":"d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.066956 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerStarted","Data":"8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.068972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t" event={"ID":"2d1c1ab5-7e43-47cd-8218-3d945574a79c","Type":"ContainerStarted","Data":"476a165e5ac1277d2ba38cef9c019671f5007fa52413c290f1e43a7139b37662"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.069067 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-9r84t" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.070670 4985 generic.go:334] "Generic (PLEG): container finished" podID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerID="b3532c01bd8307d25c0ad6b941e217b75cf8f836e9ddc2623bf3d7cfac146df1" exitCode=0 Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.070715 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerDied","Data":"b3532c01bd8307d25c0ad6b941e217b75cf8f836e9ddc2623bf3d7cfac146df1"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.072407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e1c7625-25e1-442f-9f71-5d2a9323306c","Type":"ContainerStarted","Data":"cdb7a2c935be73f6614fdc0b3e030d51920f96308f271b19791dab132d08302b"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.073401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" event={"ID":"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3","Type":"ContainerStarted","Data":"675439af974dddbf47cd9e99f2088bc55d3793ed853e1f96188d1c6dfc1f7742"} Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.091749 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-ui-dashboards-66cbf594b5-5w5dn" podStartSLOduration=20.495170079 podStartE2EDuration="35.091727283s" podCreationTimestamp="2026-01-28 18:35:05 +0000 UTC" firstStartedPulling="2026-01-28 18:35:24.351125649 +0000 UTC m=+1335.177688470" lastFinishedPulling="2026-01-28 18:35:38.947682853 +0000 UTC m=+1349.774245674" observedRunningTime="2026-01-28 18:35:40.076784761 +0000 UTC m=+1350.903347582" watchObservedRunningTime="2026-01-28 18:35:40.091727283 +0000 UTC m=+1350.918290104" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.147691 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9r84t" podStartSLOduration=18.80090625 podStartE2EDuration="33.147665532s" podCreationTimestamp="2026-01-28 18:35:07 +0000 UTC" firstStartedPulling="2026-01-28 18:35:24.583012026 +0000 UTC m=+1335.409574847" lastFinishedPulling="2026-01-28 18:35:38.929771308 +0000 UTC m=+1349.756334129" observedRunningTime="2026-01-28 18:35:40.138813212 +0000 UTC m=+1350.965376043" watchObservedRunningTime="2026-01-28 18:35:40.147665532 +0000 UTC m=+1350.974228353" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.383810 4985 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.390056 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.393899 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.394002 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.394134 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.406840 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447367 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447428 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447471 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.447846 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.448020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.448121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.448164 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hdhf\" (UniqueName: 
\"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.473541 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-l4q82"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.475486 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.489328 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.507584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l4q82"] Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550140 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550203 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550329 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550350 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") 
pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550427 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550460 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550496 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550512 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.550576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.555539 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.557147 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " 
pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.558358 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.558434 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.561535 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.562893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.578816 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"swift-ring-rebalance-6lq9x\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652490 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652607 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652646 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.652811 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.656188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.656373 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.657975 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.658460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.661956 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.662281 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.733118 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbrps\" (UniqueName: 
\"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"swift-ring-rebalance-l4q82\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.846667 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:40 crc kubenswrapper[4985]: I0128 18:35:40.863156 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.064590 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:41 crc kubenswrapper[4985]: E0128 18:35:41.065295 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:41 crc kubenswrapper[4985]: E0128 18:35:41.065317 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:41 crc kubenswrapper[4985]: E0128 18:35:41.065384 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:45.065362151 +0000 UTC m=+1355.891924972 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.098833 4985 generic.go:334] "Generic (PLEG): container finished" podID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerID="a6103d5721e8d5e8d69b116fa910ec638e1c66737a310fcba779b01a88563be1" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.098928 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" event={"ID":"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3","Type":"ContainerDied","Data":"a6103d5721e8d5e8d69b116fa910ec638e1c66737a310fcba779b01a88563be1"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.102551 4985 generic.go:334] "Generic (PLEG): container finished" podID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerID="48b9afd0e8ea6f4d4858d6f84a49b2f7c97a3a8f124cd52fc3574f7899a262df" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.102651 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerDied","Data":"48b9afd0e8ea6f4d4858d6f84a49b2f7c97a3a8f124cd52fc3574f7899a262df"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.128328 4985 generic.go:334] "Generic (PLEG): container finished" podID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerID="3dc2fb534ca52f8faf7f4cde3f2dda84c2df48066734fe6ac9c5b40591a7af86" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.128412 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerDied","Data":"3dc2fb534ca52f8faf7f4cde3f2dda84c2df48066734fe6ac9c5b40591a7af86"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.146210 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa80be1e-734c-44bc-a957-137332ecd58a" containerID="b07a966b1eedec1e93ccdffea190010036fa22a709598fabaaf5909bac14f589" exitCode=0 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.147586 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerDied","Data":"b07a966b1eedec1e93ccdffea190010036fa22a709598fabaaf5909bac14f589"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.175454 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerStarted","Data":"68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d"} Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.187709 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.188059 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.442245 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-l4q82"] Jan 28 18:35:41 crc kubenswrapper[4985]: W0128 18:35:41.457811 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod75109476_5e36_45b8_afb9_1e7f3a9331f9.slice/crio-c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314 WatchSource:0}: Error finding container c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314: Status 404 returned error can't find the container with id c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314 Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.668748 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.777934 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.903232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.903872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.903960 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.904081 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") pod \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\" (UID: \"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3\") " Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.911430 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8" (OuterVolumeSpecName: "kube-api-access-rn2k8") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "kube-api-access-rn2k8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:41 crc kubenswrapper[4985]: I0128 18:35:41.982554 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.000427 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.001196 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config" (OuterVolumeSpecName: "config") pod "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" (UID: "a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006528 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006569 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rn2k8\" (UniqueName: \"kubernetes.io/projected/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-kube-api-access-rn2k8\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006582 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.006590 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.199692 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerStarted","Data":"915b604eb65ad128607175fc36fd28a21541e6d64dcf795a8773b255c6feb3c7"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.199757 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-f287q" event={"ID":"2c181f14-26b7-49f4-9ae0-869d9b291938","Type":"ContainerStarted","Data":"ceb50d163fa3519c9657532c007f0ca735c8deae4820e378cf9b4069247a0b84"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.200294 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.200325 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-f287q" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.202735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerStarted","Data":"c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.211513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.218788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerStarted","Data":"7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.218980 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.222974 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerStarted","Data":"8984873f7fbeb5534245e789d9a64682aba9641126cebac96c088a070c8c95bb"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 
18:35:42.234771 4985 generic.go:334] "Generic (PLEG): container finished" podID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerID="68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d" exitCode=0 Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.234845 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerDied","Data":"68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.234875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerStarted","Data":"4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.236777 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.237916 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-f287q" podStartSLOduration=21.340028475 podStartE2EDuration="35.237873313s" podCreationTimestamp="2026-01-28 18:35:07 +0000 UTC" firstStartedPulling="2026-01-28 18:35:25.004694911 +0000 UTC m=+1335.831257732" lastFinishedPulling="2026-01-28 18:35:38.902539749 +0000 UTC m=+1349.729102570" observedRunningTime="2026-01-28 18:35:42.224381722 +0000 UTC m=+1353.050944553" watchObservedRunningTime="2026-01-28 18:35:42.237873313 +0000 UTC m=+1353.064436134" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.239820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" event={"ID":"a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3","Type":"ContainerDied","Data":"675439af974dddbf47cd9e99f2088bc55d3793ed853e1f96188d1c6dfc1f7742"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.239862 4985 scope.go:117] "RemoveContainer" containerID="a6103d5721e8d5e8d69b116fa910ec638e1c66737a310fcba779b01a88563be1" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.239993 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bc7876d45-kf7j5" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.261835 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0"} Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.264901 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podStartSLOduration=6.799873137 podStartE2EDuration="7.264875035s" podCreationTimestamp="2026-01-28 18:35:35 +0000 UTC" firstStartedPulling="2026-01-28 18:35:39.484565211 +0000 UTC m=+1350.311128032" lastFinishedPulling="2026-01-28 18:35:39.949567069 +0000 UTC m=+1350.776129930" observedRunningTime="2026-01-28 18:35:42.251633982 +0000 UTC m=+1353.078196813" watchObservedRunningTime="2026-01-28 18:35:42.264875035 +0000 UTC m=+1353.091437856" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.279096 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=23.362210927 podStartE2EDuration="43.279062046s" podCreationTimestamp="2026-01-28 18:34:59 +0000 UTC" firstStartedPulling="2026-01-28 18:35:04.017557735 +0000 UTC m=+1314.844120566" lastFinishedPulling="2026-01-28 18:35:23.934408864 +0000 UTC m=+1334.760971685" observedRunningTime="2026-01-28 18:35:42.275641989 +0000 UTC m=+1353.102204810" watchObservedRunningTime="2026-01-28 18:35:42.279062046 +0000 UTC m=+1353.105624867" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.322553 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-sbd6h" podStartSLOduration=9.748896338 podStartE2EDuration="10.322528593s" podCreationTimestamp="2026-01-28 18:35:32 +0000 UTC" firstStartedPulling="2026-01-28 18:35:39.729754163 +0000 UTC m=+1350.556316984" lastFinishedPulling="2026-01-28 18:35:40.303386418 +0000 UTC m=+1351.129949239" observedRunningTime="2026-01-28 18:35:42.304314759 +0000 UTC m=+1353.130877590" watchObservedRunningTime="2026-01-28 18:35:42.322528593 +0000 UTC m=+1353.149091414" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.360054 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=22.166123711 podStartE2EDuration="42.360030042s" podCreationTimestamp="2026-01-28 18:35:00 +0000 UTC" firstStartedPulling="2026-01-28 18:35:04.331276972 +0000 UTC m=+1315.157839793" lastFinishedPulling="2026-01-28 18:35:24.525183303 +0000 UTC m=+1335.351746124" observedRunningTime="2026-01-28 18:35:42.329796008 +0000 UTC m=+1353.156358829" watchObservedRunningTime="2026-01-28 18:35:42.360030042 +0000 UTC m=+1353.186592863" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.371606 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.380371 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bc7876d45-kf7j5"] Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.583434 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:42 crc kubenswrapper[4985]: I0128 18:35:42.583490 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:43 crc kubenswrapper[4985]: I0128 18:35:43.279103 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" path="/var/lib/kubelet/pods/a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3/volumes" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.279592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-vsdt5" event={"ID":"d67712df-b1fe-463f-9a6c-c0591aa6cec2","Type":"ContainerStarted","Data":"95c3e5aa1cefcadf132fa9c16f2ebce0b4609c97428c17b58c9b0666940e9a66"} Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.285771 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"76ff3fb3-d9e1-41dc-a644-8ac29cb97d11","Type":"ContainerStarted","Data":"90271bf8a8a83b77da89912a0b1e37403508523bddff9f8d403b25844dea1383"} Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.289729 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"6e1c7625-25e1-442f-9f71-5d2a9323306c","Type":"ContainerStarted","Data":"d4f8e68010b80f72bdfffb75c6fd4d5190736525ed76f427c0d1e127e9609bcc"} Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.308584 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-vsdt5" podStartSLOduration=8.899183517 podStartE2EDuration="13.308561794s" podCreationTimestamp="2026-01-28 18:35:31 +0000 UTC" firstStartedPulling="2026-01-28 18:35:39.486850716 +0000 UTC m=+1350.313413537" lastFinishedPulling="2026-01-28 18:35:43.896228993 +0000 UTC m=+1354.722791814" observedRunningTime="2026-01-28 18:35:44.300163427 +0000 UTC m=+1355.126726268" watchObservedRunningTime="2026-01-28 18:35:44.308561794 +0000 UTC m=+1355.135124625" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.341913 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=15.537851043 podStartE2EDuration="34.341890685s" podCreationTimestamp="2026-01-28 18:35:10 +0000 UTC" firstStartedPulling="2026-01-28 18:35:25.015716852 +0000 UTC m=+1335.842279673" lastFinishedPulling="2026-01-28 18:35:43.819756494 +0000 UTC m=+1354.646319315" observedRunningTime="2026-01-28 18:35:44.332016856 +0000 UTC m=+1355.158579697" watchObservedRunningTime="2026-01-28 18:35:44.341890685 +0000 UTC m=+1355.168453506" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.369432 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=17.84456502 podStartE2EDuration="37.369408702s" podCreationTimestamp="2026-01-28 18:35:07 +0000 UTC" firstStartedPulling="2026-01-28 18:35:24.349221005 +0000 UTC m=+1335.175783836" lastFinishedPulling="2026-01-28 18:35:43.874064707 +0000 UTC m=+1354.700627518" observedRunningTime="2026-01-28 18:35:44.347518924 +0000 UTC m=+1355.174081745" watchObservedRunningTime="2026-01-28 18:35:44.369408702 +0000 UTC m=+1355.195971513" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.818966 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:44 crc kubenswrapper[4985]: I0128 18:35:44.862535 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.067476 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.087217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.087416 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.087432 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.087483 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:35:53.087469603 +0000 UTC m=+1363.914032434 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.116025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.307646 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.307681 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.360335 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.372060 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.781196 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:35:45 crc kubenswrapper[4985]: E0128 18:35:45.781972 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerName="init" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.781998 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerName="init" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.782558 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a07568fd-d7d5-48f2-a7ac-3659c5a4a9d3" containerName="init" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.785328 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.795603 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.799989 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.800282 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-7rqdh" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.800433 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.811854 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913606 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-config\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-znshz\" (UniqueName: \"kubernetes.io/projected/76a14385-7b25-48b8-8614-1a77892a1119-kube-api-access-znshz\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913707 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913762 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76a14385-7b25-48b8-8614-1a77892a1119-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913795 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913837 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:45 crc kubenswrapper[4985]: I0128 18:35:45.913880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-scripts\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: 
I0128 18:35:46.015932 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-config\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.015996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-znshz\" (UniqueName: \"kubernetes.io/projected/76a14385-7b25-48b8-8614-1a77892a1119-kube-api-access-znshz\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76a14385-7b25-48b8-8614-1a77892a1119-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016086 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016127 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016174 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-scripts\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.016691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/76a14385-7b25-48b8-8614-1a77892a1119-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.017391 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-scripts\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.017439 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/76a14385-7b25-48b8-8614-1a77892a1119-config\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.024225 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.027369 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.040244 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-znshz\" (UniqueName: \"kubernetes.io/projected/76a14385-7b25-48b8-8614-1a77892a1119-kube-api-access-znshz\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.045058 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/76a14385-7b25-48b8-8614-1a77892a1119-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"76a14385-7b25-48b8-8614-1a77892a1119\") " pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.130727 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.321767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7"} Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.882048 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:46 crc kubenswrapper[4985]: I0128 18:35:46.982905 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 18:35:47 crc kubenswrapper[4985]: I0128 18:35:47.720784 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:47 crc kubenswrapper[4985]: W0128 18:35:47.826808 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76a14385_7b25_48b8_8614_1a77892a1119.slice/crio-4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5 WatchSource:0}: Error finding container 4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5: Status 404 returned error can't find the container with id 4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5 Jan 28 18:35:47 crc kubenswrapper[4985]: I0128 18:35:47.837970 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.351897 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"76a14385-7b25-48b8-8614-1a77892a1119","Type":"ContainerStarted","Data":"4ff942b196a891386363ab9cf92d0621b9bee9bd1a17f13ee4166170c805f2c5"} Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.359406 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" 
event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerStarted","Data":"d9984694685d646182db409a296c9eb34220178e5fa3648431bc4bdbe12a9c45"} Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.363087 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerStarted","Data":"00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc"} Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.363225 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-ring-rebalance-6lq9x" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance" containerID="cri-o://00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc" gracePeriod=30 Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.388871 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-l4q82" podStartSLOduration=2.483675252 podStartE2EDuration="8.388845709s" podCreationTimestamp="2026-01-28 18:35:40 +0000 UTC" firstStartedPulling="2026-01-28 18:35:41.46490938 +0000 UTC m=+1352.291472201" lastFinishedPulling="2026-01-28 18:35:47.370079837 +0000 UTC m=+1358.196642658" observedRunningTime="2026-01-28 18:35:48.387846591 +0000 UTC m=+1359.214409432" watchObservedRunningTime="2026-01-28 18:35:48.388845709 +0000 UTC m=+1359.215408550" Jan 28 18:35:48 crc kubenswrapper[4985]: I0128 18:35:48.420023 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-6lq9x" podStartSLOduration=2.7201037059999997 podStartE2EDuration="8.420001448s" podCreationTimestamp="2026-01-28 18:35:40 +0000 UTC" firstStartedPulling="2026-01-28 18:35:41.680124986 +0000 UTC m=+1352.506687807" lastFinishedPulling="2026-01-28 18:35:47.380022728 +0000 UTC m=+1358.206585549" observedRunningTime="2026-01-28 18:35:48.405731706 +0000 UTC m=+1359.232294527" watchObservedRunningTime="2026-01-28 18:35:48.420001448 +0000 UTC m=+1359.246564269" Jan 28 18:35:50 crc kubenswrapper[4985]: I0128 18:35:50.539077 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:35:50 crc kubenswrapper[4985]: I0128 18:35:50.605746 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:50 crc kubenswrapper[4985]: I0128 18:35:50.606047 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-sbd6h" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" containerID="cri-o://4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce" gracePeriod=10 Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.077435 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.077847 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.220842 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.223129 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.226882 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.231846 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.383406 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.383464 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.404968 4985 generic.go:334] "Generic (PLEG): container finished" podID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerID="4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce" exitCode=0 Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.405016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerDied","Data":"4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce"} Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.485937 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.486029 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.487116 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.524891 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"root-account-create-update-fm4x7\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.555008 4985 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.764040 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 18:35:51 crc kubenswrapper[4985]: I0128 18:35:51.944495 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.184186 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.185662 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.198764 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.255909 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.274361 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.275921 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.280638 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.286497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.300630 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.332304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.332653 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.428406 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fm4x7" event={"ID":"12f068aa-ed0a-47e7-9f95-16f86bf91343","Type":"ContainerStarted","Data":"8bd64f391002afc6ed3d23bed80d044acc414be4bab0351a66dfcef4e0f3f74c"} Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.442027 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-sbd6h" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.442043 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-sbd6h" event={"ID":"dadb283d-7f9f-414c-9017-f8c0875878ad","Type":"ContainerDied","Data":"8651fb5de970f4dd3ff0cc87b132ffe1891fcfecc007311983832fbce5848762"} Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.442107 4985 scope.go:117] "RemoveContainer" containerID="4fbdfdf2644365e56621c8dd65f4dc2403575c997b33777a83fc07aed15bfdce" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447203 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447447 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.447726 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") pod \"dadb283d-7f9f-414c-9017-f8c0875878ad\" (UID: \"dadb283d-7f9f-414c-9017-f8c0875878ad\") " Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.450806 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.450931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.451038 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " 
pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.451242 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.453463 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp" (OuterVolumeSpecName: "kube-api-access-mcghp") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "kube-api-access-mcghp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.454520 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.458712 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mcghp\" (UniqueName: \"kubernetes.io/projected/dadb283d-7f9f-414c-9017-f8c0875878ad-kube-api-access-mcghp\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.468460 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:35:52 crc kubenswrapper[4985]: E0128 18:35:52.468993 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="init" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.469018 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="init" Jan 28 18:35:52 crc kubenswrapper[4985]: E0128 18:35:52.469039 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.469047 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.469346 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" containerName="dnsmasq-dns" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.470134 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.474057 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"keystone-db-create-ksczb\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.478467 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.480078 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-64878fb8f-ljltp" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console" containerID="cri-o://c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" gracePeriod=15 Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.480307 4985 scope.go:117] "RemoveContainer" containerID="68193873dff4bd6a35834f28702dce0fa7f1463ec5af6dd5571aab6e1aa60d3d" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.528058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.560508 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.560636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.564373 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.564847 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.573906 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.575446 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.578529 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.579371 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config" (OuterVolumeSpecName: "config") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.579793 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.594121 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "dadb283d-7f9f-414c-9017-f8c0875878ad" (UID: "dadb283d-7f9f-414c-9017-f8c0875878ad"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.595807 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"keystone-1abf-account-create-update-fwwhm\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.608972 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.647525 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.654357 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669217 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669373 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669560 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669574 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.669588 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dadb283d-7f9f-414c-9017-f8c0875878ad-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.772762 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.772830 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.773234 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.773552 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.773672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.806084 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"placement-db-create-9qd5p\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.851015 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.857672 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.863497 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.878322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.878447 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.880758 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.898019 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"placement-3e6a-account-create-update-ktg62\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.906476 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.985142 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.985377 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.996710 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:35:52 crc kubenswrapper[4985]: I0128 18:35:52.998174 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:52.999945 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.006130 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.086903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087089 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087134 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087173 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.087973 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"glance-db-create-z2jgs\" (UID: 
\"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.114548 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"glance-db-create-z2jgs\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.116758 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.189886 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.189943 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.190016 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.190107 4985 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.190136 4985 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.190194 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift podName:4b55b35c-0ef1-4db8-b435-24de7fda8ecc nodeName:}" failed. No retries permitted until 2026-01-28 18:36:09.190177102 +0000 UTC m=+1380.016739923 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift") pod "swift-storage-0" (UID: "4b55b35c-0ef1-4db8-b435-24de7fda8ecc") : configmap "swift-ring-files" not found Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.191433 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.212043 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"glance-7fd1-account-create-update-tlhk7\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.280265 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.288296 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.288339 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-sbd6h"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.294474 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.309585 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64878fb8f-ljltp_0d2b3a75-cb2e-41a2-9005-a72a8aebb818/console/0.log" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.309651 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.402864 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.402958 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.402999 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403073 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403290 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403409 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.403450 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") pod \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\" (UID: \"0d2b3a75-cb2e-41a2-9005-a72a8aebb818\") " Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.407126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.407617 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.408345 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config" (OuterVolumeSpecName: "console-config") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.410804 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca" (OuterVolumeSpecName: "service-ca") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.413176 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.416358 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67" (OuterVolumeSpecName: "kube-api-access-cpv67") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "kube-api-access-cpv67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.426611 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "0d2b3a75-cb2e-41a2-9005-a72a8aebb818" (UID: "0d2b3a75-cb2e-41a2-9005-a72a8aebb818"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.475942 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.483851 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"76a14385-7b25-48b8-8614-1a77892a1119","Type":"ContainerStarted","Data":"6857e6477c043d09d8a7adde771c8aa2d521d7a625e2cbad40fe527cba92acba"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.483885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"76a14385-7b25-48b8-8614-1a77892a1119","Type":"ContainerStarted","Data":"09facf0b5f7f7b955017702e5f0cca1614271f1db9b3f6b6134d147566e4189f"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.484524 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.491463 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508465 4985 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508505 4985 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508521 4985 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508533 4985 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508553 4985 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508564 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpv67\" (UniqueName: \"kubernetes.io/projected/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-kube-api-access-cpv67\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.508575 4985 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/0d2b3a75-cb2e-41a2-9005-a72a8aebb818-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.514797 4985 generic.go:334] "Generic (PLEG): container finished" podID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerID="e79b0c26c13e421f90b1e346a7a6ed37fdf036d779d67dcae2b50acce53ce0c6" exitCode=0 Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.514874 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fm4x7" 
event={"ID":"12f068aa-ed0a-47e7-9f95-16f86bf91343","Type":"ContainerDied","Data":"e79b0c26c13e421f90b1e346a7a6ed37fdf036d779d67dcae2b50acce53ce0c6"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.522519 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=4.389896532 podStartE2EDuration="8.522504164s" podCreationTimestamp="2026-01-28 18:35:45 +0000 UTC" firstStartedPulling="2026-01-28 18:35:47.830441825 +0000 UTC m=+1358.657004646" lastFinishedPulling="2026-01-28 18:35:51.963049457 +0000 UTC m=+1362.789612278" observedRunningTime="2026-01-28 18:35:53.511934076 +0000 UTC m=+1364.338496897" watchObservedRunningTime="2026-01-28 18:35:53.522504164 +0000 UTC m=+1364.349066985" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546403 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-64878fb8f-ljltp_0d2b3a75-cb2e-41a2-9005-a72a8aebb818/console/0.log" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546449 4985 generic.go:334] "Generic (PLEG): container finished" podID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" exitCode=2 Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerDied","Data":"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546505 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64878fb8f-ljltp" event={"ID":"0d2b3a75-cb2e-41a2-9005-a72a8aebb818","Type":"ContainerDied","Data":"5a102b8490fbf118bf29ead080a5a651f553a5218e77ce9190605ec1fabffe5e"} Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546521 4985 scope.go:117] "RemoveContainer" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.546600 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-64878fb8f-ljltp" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.617003 4985 scope.go:117] "RemoveContainer" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" Jan 28 18:35:53 crc kubenswrapper[4985]: E0128 18:35:53.621032 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8\": container with ID starting with c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8 not found: ID does not exist" containerID="c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.621094 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8"} err="failed to get container status \"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8\": rpc error: code = NotFound desc = could not find container \"c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8\": container with ID starting with c469580e6e826c4c97b551da91e215015bea11f181f7f197c8807e25ea31bef8 not found: ID does not exist" Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.623456 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"] Jan 28 18:35:53 crc kubenswrapper[4985]: I0128 18:35:53.636117 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-64878fb8f-ljltp"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.169907 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.180910 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.191538 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.199665 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.561471 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerStarted","Data":"b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.561512 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerStarted","Data":"6d9b1c199f1062535f568d8f45dde873fe42b5b81b0f1392ff76e0211f842360"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.565860 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7" exitCode=0 Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.565962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 
18:35:54.570374 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerStarted","Data":"521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.570423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerStarted","Data":"bb09edc01a4c3afb4449a4dacb7ab86a9a7a6e0d155a46be22553034c547ae03"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.576089 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerStarted","Data":"7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.576148 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerStarted","Data":"2b9e72b871ae9726c48909179e5d8e9383458a61e82e6086b4c9d2eaeaa79c60"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.576709 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-z2jgs" podStartSLOduration=2.5766934470000002 podStartE2EDuration="2.576693447s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.574168775 +0000 UTC m=+1365.400731596" watchObservedRunningTime="2026-01-28 18:35:54.576693447 +0000 UTC m=+1365.403256268" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.581109 4985 generic.go:334] "Generic (PLEG): container finished" podID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerID="3060e8923564aa30fd03bf66b3d5bcff3578ea99d0b7eb76a560b9022326b58d" exitCode=0 Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.581171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1abf-account-create-update-fwwhm" event={"ID":"e6004532-b8ab-4b69-9907-e7bd26c6735a","Type":"ContainerDied","Data":"3060e8923564aa30fd03bf66b3d5bcff3578ea99d0b7eb76a560b9022326b58d"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.581213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1abf-account-create-update-fwwhm" event={"ID":"e6004532-b8ab-4b69-9907-e7bd26c6735a","Type":"ContainerStarted","Data":"f24ff43e9c1efa3a7fc1289bc1ab6b77ffa3e1a45be1121c6dcc1ee3c4ef0fb9"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.583200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerStarted","Data":"609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.583237 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerStarted","Data":"189015c56b26a2946bc608b7b573f5ccb4f5e157b8c0ad9b525476261a7b20ac"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.585500 4985 generic.go:334] "Generic (PLEG): container finished" podID="9900c5fe-8fec-452e-86cc-98d901c94329" 
containerID="a5fdb593967057491cb666085c46aac8c70a1408fffafe7d2ec91a2157ba041a" exitCode=0 Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.585636 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ksczb" event={"ID":"9900c5fe-8fec-452e-86cc-98d901c94329","Type":"ContainerDied","Data":"a5fdb593967057491cb666085c46aac8c70a1408fffafe7d2ec91a2157ba041a"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.585714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ksczb" event={"ID":"9900c5fe-8fec-452e-86cc-98d901c94329","Type":"ContainerStarted","Data":"27094ed44a1a823e00c87afc7c6b6780c4e13b4f03410388f06fe7b875da5910"} Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.597819 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-3e6a-account-create-update-ktg62" podStartSLOduration=2.597799243 podStartE2EDuration="2.597799243s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.597052672 +0000 UTC m=+1365.423615493" watchObservedRunningTime="2026-01-28 18:35:54.597799243 +0000 UTC m=+1365.424362064" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.642188 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-7fd1-account-create-update-tlhk7" podStartSLOduration=2.642169545 podStartE2EDuration="2.642169545s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.634631183 +0000 UTC m=+1365.461194004" watchObservedRunningTime="2026-01-28 18:35:54.642169545 +0000 UTC m=+1365.468732356" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.725786 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-9qd5p" podStartSLOduration=2.725766035 podStartE2EDuration="2.725766035s" podCreationTimestamp="2026-01-28 18:35:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:54.682620497 +0000 UTC m=+1365.509183318" watchObservedRunningTime="2026-01-28 18:35:54.725766035 +0000 UTC m=+1365.552328856" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.864590 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:35:54 crc kubenswrapper[4985]: E0128 18:35:54.865314 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.865341 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.865602 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" containerName="console" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.866649 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:54 crc kubenswrapper[4985]: I0128 18:35:54.892002 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.054225 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.054388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.071589 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.072951 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.074649 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-db-secret" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.084121 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.097328 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.156686 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.156792 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.157568 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.187941 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"mysqld-exporter-openstack-db-create-kwqd8\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.259145 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") pod \"12f068aa-ed0a-47e7-9f95-16f86bf91343\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.259467 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") pod \"12f068aa-ed0a-47e7-9f95-16f86bf91343\" (UID: \"12f068aa-ed0a-47e7-9f95-16f86bf91343\") " Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.260075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.260115 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.260450 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12f068aa-ed0a-47e7-9f95-16f86bf91343" (UID: "12f068aa-ed0a-47e7-9f95-16f86bf91343"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.263809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj" (OuterVolumeSpecName: "kube-api-access-6lshj") pod "12f068aa-ed0a-47e7-9f95-16f86bf91343" (UID: "12f068aa-ed0a-47e7-9f95-16f86bf91343"). InnerVolumeSpecName "kube-api-access-6lshj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.280139 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d2b3a75-cb2e-41a2-9005-a72a8aebb818" path="/var/lib/kubelet/pods/0d2b3a75-cb2e-41a2-9005-a72a8aebb818/volumes" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.281542 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dadb283d-7f9f-414c-9017-f8c0875878ad" path="/var/lib/kubelet/pods/dadb283d-7f9f-414c-9017-f8c0875878ad/volumes" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.362769 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.363234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.364566 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6lshj\" (UniqueName: \"kubernetes.io/projected/12f068aa-ed0a-47e7-9f95-16f86bf91343-kube-api-access-6lshj\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.364599 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12f068aa-ed0a-47e7-9f95-16f86bf91343-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.386458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.392121 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.405615 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"mysqld-exporter-53b2-account-create-update-qhkg4\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.410370 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.603900 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerID="609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.604461 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerDied","Data":"609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.615153 4985 generic.go:334] "Generic (PLEG): container finished" podID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerID="b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.615218 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerDied","Data":"b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.619286 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-fm4x7" event={"ID":"12f068aa-ed0a-47e7-9f95-16f86bf91343","Type":"ContainerDied","Data":"8bd64f391002afc6ed3d23bed80d044acc414be4bab0351a66dfcef4e0f3f74c"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.619349 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8bd64f391002afc6ed3d23bed80d044acc414be4bab0351a66dfcef4e0f3f74c" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.619446 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-fm4x7" Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.626892 4985 generic.go:334] "Generic (PLEG): container finished" podID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerID="521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.627132 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerDied","Data":"521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.634691 4985 generic.go:334] "Generic (PLEG): container finished" podID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerID="7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea" exitCode=0 Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.634756 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerDied","Data":"7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea"} Jan 28 18:35:55 crc kubenswrapper[4985]: I0128 18:35:55.872811 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.228423 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.383610 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.389974 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494177 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") pod \"e6004532-b8ab-4b69-9907-e7bd26c6735a\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494316 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") pod \"9900c5fe-8fec-452e-86cc-98d901c94329\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") pod \"9900c5fe-8fec-452e-86cc-98d901c94329\" (UID: \"9900c5fe-8fec-452e-86cc-98d901c94329\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.494683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") pod \"e6004532-b8ab-4b69-9907-e7bd26c6735a\" (UID: \"e6004532-b8ab-4b69-9907-e7bd26c6735a\") " Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.497234 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9900c5fe-8fec-452e-86cc-98d901c94329" (UID: "9900c5fe-8fec-452e-86cc-98d901c94329"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.497910 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e6004532-b8ab-4b69-9907-e7bd26c6735a" (UID: "e6004532-b8ab-4b69-9907-e7bd26c6735a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.517524 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc" (OuterVolumeSpecName: "kube-api-access-7rwlc") pod "e6004532-b8ab-4b69-9907-e7bd26c6735a" (UID: "e6004532-b8ab-4b69-9907-e7bd26c6735a"). InnerVolumeSpecName "kube-api-access-7rwlc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.517613 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5" (OuterVolumeSpecName: "kube-api-access-jncg5") pod "9900c5fe-8fec-452e-86cc-98d901c94329" (UID: "9900c5fe-8fec-452e-86cc-98d901c94329"). InnerVolumeSpecName "kube-api-access-jncg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.599939 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rwlc\" (UniqueName: \"kubernetes.io/projected/e6004532-b8ab-4b69-9907-e7bd26c6735a-kube-api-access-7rwlc\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.599986 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jncg5\" (UniqueName: \"kubernetes.io/projected/9900c5fe-8fec-452e-86cc-98d901c94329-kube-api-access-jncg5\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.600000 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9900c5fe-8fec-452e-86cc-98d901c94329-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.600012 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e6004532-b8ab-4b69-9907-e7bd26c6735a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.670907 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-1abf-account-create-update-fwwhm" event={"ID":"e6004532-b8ab-4b69-9907-e7bd26c6735a","Type":"ContainerDied","Data":"f24ff43e9c1efa3a7fc1289bc1ab6b77ffa3e1a45be1121c6dcc1ee3c4ef0fb9"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.670969 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f24ff43e9c1efa3a7fc1289bc1ab6b77ffa3e1a45be1121c6dcc1ee3c4ef0fb9" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.671072 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-1abf-account-create-update-fwwhm" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.682633 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-ksczb" event={"ID":"9900c5fe-8fec-452e-86cc-98d901c94329","Type":"ContainerDied","Data":"27094ed44a1a823e00c87afc7c6b6780c4e13b4f03410388f06fe7b875da5910"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.682674 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27094ed44a1a823e00c87afc7c6b6780c4e13b4f03410388f06fe7b875da5910" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.682741 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-ksczb" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.686372 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerStarted","Data":"cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.686607 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerStarted","Data":"9e2efe46034044851f5a3e637e431cf9ea43affccfac6f4e797b1d360ae90de8"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.695473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerStarted","Data":"dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.695528 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerStarted","Data":"bbbe3861e112c80337ea958edc9df2015e30e5d8f56b8fda15972e6b8bc59e33"} Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.713325 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" podStartSLOduration=1.713294688 podStartE2EDuration="1.713294688s" podCreationTimestamp="2026-01-28 18:35:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:56.705392405 +0000 UTC m=+1367.531955226" watchObservedRunningTime="2026-01-28 18:35:56.713294688 +0000 UTC m=+1367.539857519" Jan 28 18:35:56 crc kubenswrapper[4985]: I0128 18:35:56.749737 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" podStartSLOduration=2.749704716 podStartE2EDuration="2.749704716s" podCreationTimestamp="2026-01-28 18:35:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:35:56.735536776 +0000 UTC m=+1367.562099597" watchObservedRunningTime="2026-01-28 18:35:56.749704716 +0000 UTC m=+1367.576267527" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.197267 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.288049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") pod \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.288953 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") pod \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\" (UID: \"8c2755f3-fac4-4f0b-9afb-a449f1587d11\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.289916 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c2755f3-fac4-4f0b-9afb-a449f1587d11" (UID: "8c2755f3-fac4-4f0b-9afb-a449f1587d11"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.294807 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c2755f3-fac4-4f0b-9afb-a449f1587d11-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.319763 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7" (OuterVolumeSpecName: "kube-api-access-797f7") pod "8c2755f3-fac4-4f0b-9afb-a449f1587d11" (UID: "8c2755f3-fac4-4f0b-9afb-a449f1587d11"). InnerVolumeSpecName "kube-api-access-797f7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.396349 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-797f7\" (UniqueName: \"kubernetes.io/projected/8c2755f3-fac4-4f0b-9afb-a449f1587d11-kube-api-access-797f7\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.608274 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.618021 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.631346 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.704603 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") pod \"1a24a5c2-4c45-43dd-a957-253323fed4d5\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705032 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1a24a5c2-4c45-43dd-a957-253323fed4d5" (UID: "1a24a5c2-4c45-43dd-a957-253323fed4d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705109 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") pod \"1a24a5c2-4c45-43dd-a957-253323fed4d5\" (UID: \"1a24a5c2-4c45-43dd-a957-253323fed4d5\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705613 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") pod \"4adf60c6-4008-4f41-a60b-cf10db1657cf\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705691 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") pod \"346cb311-0387-4c85-9827-e0091b1e6bcd\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705719 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") pod \"346cb311-0387-4c85-9827-e0091b1e6bcd\" (UID: \"346cb311-0387-4c85-9827-e0091b1e6bcd\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.705762 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") pod \"4adf60c6-4008-4f41-a60b-cf10db1657cf\" (UID: \"4adf60c6-4008-4f41-a60b-cf10db1657cf\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.706381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "346cb311-0387-4c85-9827-e0091b1e6bcd" (UID: "346cb311-0387-4c85-9827-e0091b1e6bcd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.706784 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4adf60c6-4008-4f41-a60b-cf10db1657cf" (UID: "4adf60c6-4008-4f41-a60b-cf10db1657cf"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.707062 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1a24a5c2-4c45-43dd-a957-253323fed4d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.707085 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/346cb311-0387-4c85-9827-e0091b1e6bcd-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.707095 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4adf60c6-4008-4f41-a60b-cf10db1657cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708671 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-z2jgs" event={"ID":"1a24a5c2-4c45-43dd-a957-253323fed4d5","Type":"ContainerDied","Data":"6d9b1c199f1062535f568d8f45dde873fe42b5b81b0f1392ff76e0211f842360"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708712 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d9b1c199f1062535f568d8f45dde873fe42b5b81b0f1392ff76e0211f842360" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708771 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-z2jgs" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.708861 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4" (OuterVolumeSpecName: "kube-api-access-ljjz4") pod "4adf60c6-4008-4f41-a60b-cf10db1657cf" (UID: "4adf60c6-4008-4f41-a60b-cf10db1657cf"). InnerVolumeSpecName "kube-api-access-ljjz4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.709779 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb" (OuterVolumeSpecName: "kube-api-access-2s5bb") pod "346cb311-0387-4c85-9827-e0091b1e6bcd" (UID: "346cb311-0387-4c85-9827-e0091b1e6bcd"). InnerVolumeSpecName "kube-api-access-2s5bb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.709819 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz" (OuterVolumeSpecName: "kube-api-access-7cbkz") pod "1a24a5c2-4c45-43dd-a957-253323fed4d5" (UID: "1a24a5c2-4c45-43dd-a957-253323fed4d5"). InnerVolumeSpecName "kube-api-access-7cbkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.712013 4985 generic.go:334] "Generic (PLEG): container finished" podID="c0714595-ac9e-4945-9250-6f499317070d" containerID="00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc" exitCode=0 Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.712053 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerDied","Data":"00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.712360 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.722583 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-3e6a-account-create-update-ktg62" event={"ID":"346cb311-0387-4c85-9827-e0091b1e6bcd","Type":"ContainerDied","Data":"bb09edc01a4c3afb4449a4dacb7ab86a9a7a6e0d155a46be22553034c547ae03"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.722624 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb09edc01a4c3afb4449a4dacb7ab86a9a7a6e0d155a46be22553034c547ae03" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.722671 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-3e6a-account-create-update-ktg62" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.727147 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-7fd1-account-create-update-tlhk7" event={"ID":"4adf60c6-4008-4f41-a60b-cf10db1657cf","Type":"ContainerDied","Data":"2b9e72b871ae9726c48909179e5d8e9383458a61e82e6086b4c9d2eaeaa79c60"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.727196 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b9e72b871ae9726c48909179e5d8e9383458a61e82e6086b4c9d2eaeaa79c60" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.727283 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-7fd1-account-create-update-tlhk7" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.741080 4985 generic.go:334] "Generic (PLEG): container finished" podID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerID="cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050" exitCode=0 Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.741134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerDied","Data":"cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.745706 4985 generic.go:334] "Generic (PLEG): container finished" podID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerID="dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28" exitCode=0 Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.745768 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerDied","Data":"dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.750347 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-9qd5p" event={"ID":"8c2755f3-fac4-4f0b-9afb-a449f1587d11","Type":"ContainerDied","Data":"189015c56b26a2946bc608b7b573f5ccb4f5e157b8c0ad9b525476261a7b20ac"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.750396 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="189015c56b26a2946bc608b7b573f5ccb4f5e157b8c0ad9b525476261a7b20ac" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.750459 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-9qd5p" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.751766 4985 generic.go:334] "Generic (PLEG): container finished" podID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerID="d9984694685d646182db409a296c9eb34220178e5fa3648431bc4bdbe12a9c45" exitCode=0 Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.751799 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerDied","Data":"d9984694685d646182db409a296c9eb34220178e5fa3648431bc4bdbe12a9c45"} Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.807844 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808427 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808572 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808669 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.808809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.809809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.810055 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") pod \"c0714595-ac9e-4945-9250-6f499317070d\" (UID: \"c0714595-ac9e-4945-9250-6f499317070d\") " Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.810668 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "ring-data-devices". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811088 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811413 4985 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811505 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cbkz\" (UniqueName: \"kubernetes.io/projected/1a24a5c2-4c45-43dd-a957-253323fed4d5-kube-api-access-7cbkz\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811577 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ljjz4\" (UniqueName: \"kubernetes.io/projected/4adf60c6-4008-4f41-a60b-cf10db1657cf-kube-api-access-ljjz4\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.811651 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s5bb\" (UniqueName: \"kubernetes.io/projected/346cb311-0387-4c85-9827-e0091b1e6bcd-kube-api-access-2s5bb\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.822463 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf" (OuterVolumeSpecName: "kube-api-access-9hdhf") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "kube-api-access-9hdhf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.824509 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.830599 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts" (OuterVolumeSpecName: "scripts") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.833163 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.839582 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "c0714595-ac9e-4945-9250-6f499317070d" (UID: "c0714595-ac9e-4945-9250-6f499317070d"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916130 4985 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916439 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9hdhf\" (UniqueName: \"kubernetes.io/projected/c0714595-ac9e-4945-9250-6f499317070d-kube-api-access-9hdhf\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916527 4985 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/c0714595-ac9e-4945-9250-6f499317070d-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916602 4985 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916676 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c0714595-ac9e-4945-9250-6f499317070d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:57 crc kubenswrapper[4985]: I0128 18:35:57.916766 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c0714595-ac9e-4945-9250-6f499317070d-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.766457 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-6lq9x" Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.767749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-6lq9x" event={"ID":"c0714595-ac9e-4945-9250-6f499317070d","Type":"ContainerDied","Data":"8984873f7fbeb5534245e789d9a64682aba9641126cebac96c088a070c8c95bb"} Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.767905 4985 scope.go:117] "RemoveContainer" containerID="00ae9927f05102567e126074090c38904675116334ef57365bcf6f128ff9bdcc" Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.768082 4985 generic.go:334] "Generic (PLEG): container finished" podID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" exitCode=0 Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.768200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerDied","Data":"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"} Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.906337 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:58 crc kubenswrapper[4985]: I0128 18:35:58.914357 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-6lq9x"] Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.271575 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.288419 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0714595-ac9e-4945-9250-6f499317070d" path="/var/lib/kubelet/pods/c0714595-ac9e-4945-9250-6f499317070d/volumes" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.349742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") pod \"9193a306-03fe-41ae-8b93-2851b08c73fb\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.350064 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") pod \"9193a306-03fe-41ae-8b93-2851b08c73fb\" (UID: \"9193a306-03fe-41ae-8b93-2851b08c73fb\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.351525 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "9193a306-03fe-41ae-8b93-2851b08c73fb" (UID: "9193a306-03fe-41ae-8b93-2851b08c73fb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.360198 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct" (OuterVolumeSpecName: "kube-api-access-8fgct") pod "9193a306-03fe-41ae-8b93-2851b08c73fb" (UID: "9193a306-03fe-41ae-8b93-2851b08c73fb"). InnerVolumeSpecName "kube-api-access-8fgct". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.429855 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.437403 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.451954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452334 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452397 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") pod \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452430 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452468 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452560 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") pod \"75109476-5e36-45b8-afb9-1e7f3a9331f9\" (UID: \"75109476-5e36-45b8-afb9-1e7f3a9331f9\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452654 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") pod 
\"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\" (UID: \"dbefdfab-0ef2-4f71-9e9c-412c4dd87886\") " Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.452833 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.453430 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/9193a306-03fe-41ae-8b93-2851b08c73fb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.453449 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fgct\" (UniqueName: \"kubernetes.io/projected/9193a306-03fe-41ae-8b93-2851b08c73fb-kube-api-access-8fgct\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.453461 4985 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.483548 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts" (OuterVolumeSpecName: "scripts") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.483977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dbefdfab-0ef2-4f71-9e9c-412c4dd87886" (UID: "dbefdfab-0ef2-4f71-9e9c-412c4dd87886"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.484545 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.485317 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps" (OuterVolumeSpecName: "kube-api-access-rbrps") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "kube-api-access-rbrps". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.485702 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.485827 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p" (OuterVolumeSpecName: "kube-api-access-whr5p") pod "dbefdfab-0ef2-4f71-9e9c-412c4dd87886" (UID: "dbefdfab-0ef2-4f71-9e9c-412c4dd87886"). InnerVolumeSpecName "kube-api-access-whr5p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.517503 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.519918 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "75109476-5e36-45b8-afb9-1e7f3a9331f9" (UID: "75109476-5e36-45b8-afb9-1e7f3a9331f9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555269 4985 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555296 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555308 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rbrps\" (UniqueName: \"kubernetes.io/projected/75109476-5e36-45b8-afb9-1e7f3a9331f9-kube-api-access-rbrps\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555317 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555326 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/75109476-5e36-45b8-afb9-1e7f3a9331f9-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555334 4985 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/75109476-5e36-45b8-afb9-1e7f3a9331f9-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555342 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whr5p\" (UniqueName: \"kubernetes.io/projected/dbefdfab-0ef2-4f71-9e9c-412c4dd87886-kube-api-access-whr5p\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.555351 4985 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/75109476-5e36-45b8-afb9-1e7f3a9331f9-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 28 18:35:59 crc 
kubenswrapper[4985]: I0128 18:35:59.645321 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.658832 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-fm4x7"] Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.736658 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-9sg6w"] Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739311 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerName="mariadb-database-create" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739356 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerName="mariadb-database-create" Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739394 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739402 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance" Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739419 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerName="mariadb-database-create" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739426 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerName="mariadb-database-create" Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739453 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerName="swift-ring-rebalance" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739459 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerName="swift-ring-rebalance" Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739476 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerName="mariadb-account-create-update" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739482 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerName="mariadb-account-create-update" Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739504 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" containerName="mariadb-database-create" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739511 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" containerName="mariadb-database-create" Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739527 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerName="mariadb-account-create-update" Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739533 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerName="mariadb-account-create-update" Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739547 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739547 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739553 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739571 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739576 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739590 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739596 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: E0128 18:35:59.739610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.739618 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740114 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740138 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740156 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75109476-5e36-45b8-afb9-1e7f3a9331f9" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740173 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740181 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740203 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740223 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0714595-ac9e-4945-9250-6f499317070d" containerName="swift-ring-rebalance"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740236 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740265 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740278 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" containerName="mariadb-account-create-update"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.740295 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" containerName="mariadb-database-create"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.741551 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.747329 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.760133 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.760186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.783180 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9sg6w"]
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.833390 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.833389 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-53b2-account-create-update-qhkg4" event={"ID":"dbefdfab-0ef2-4f71-9e9c-412c4dd87886","Type":"ContainerDied","Data":"9e2efe46034044851f5a3e637e431cf9ea43affccfac6f4e797b1d360ae90de8"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.833507 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e2efe46034044851f5a3e637e431cf9ea43affccfac6f4e797b1d360ae90de8"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.836919 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-l4q82"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.837460 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-l4q82" event={"ID":"75109476-5e36-45b8-afb9-1e7f3a9331f9","Type":"ContainerDied","Data":"c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.837526 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c1416088ef67bc8d80926482d433fdd2be41d91a244a0f52cf43dc4e1bdb2314"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.848279 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerStarted","Data":"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.848568 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.852420 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8" event={"ID":"9193a306-03fe-41ae-8b93-2851b08c73fb","Type":"ContainerDied","Data":"bbbe3861e112c80337ea958edc9df2015e30e5d8f56b8fda15972e6b8bc59e33"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.852465 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbbe3861e112c80337ea958edc9df2015e30e5d8f56b8fda15972e6b8bc59e33"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.852475 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-db-create-kwqd8"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.862966 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.863022 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.864816 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.866541 4985 generic.go:334] "Generic (PLEG): container finished" podID="313d3857-140a-4a66-8329-12453fc8dd4c" containerID="4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7" exitCode=0
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.866594 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerDied","Data":"4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7"}
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.880106 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=38.36363615 podStartE2EDuration="1m1.880083703s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.410925482 +0000 UTC m=+1311.237488303" lastFinishedPulling="2026-01-28 18:35:23.927373035 +0000 UTC m=+1334.753935856" observedRunningTime="2026-01-28 18:35:59.874627749 +0000 UTC m=+1370.701190570" watchObservedRunningTime="2026-01-28 18:35:59.880083703 +0000 UTC m=+1370.706646524"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.882757 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"root-account-create-update-9sg6w\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") " pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:35:59 crc kubenswrapper[4985]: I0128 18:35:59.923620 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.402749 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-9sg6w"]
Jan 28 18:36:00 crc kubenswrapper[4985]: W0128 18:36:00.405697 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdbd403f_b5d7_4aba_9ee6_bcbbd933b212.slice/crio-82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00 WatchSource:0}: Error finding container 82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00: Status 404 returned error can't find the container with id 82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.881656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerStarted","Data":"40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.882974 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.887332 4985 generic.go:334] "Generic (PLEG): container finished" podID="9549037f-5867-44ac-86dc-a02105e4c414" containerID="bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8" exitCode=0
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.887427 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerDied","Data":"bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.889755 4985 generic.go:334] "Generic (PLEG): container finished" podID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerID="448c9182ae2c3757a2a9e99f29042394c97a623fe1975f8bf4c1b669c7542ca8" exitCode=0
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.889836 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sg6w" event={"ID":"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212","Type":"ContainerDied","Data":"448c9182ae2c3757a2a9e99f29042394c97a623fe1975f8bf4c1b669c7542ca8"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.889873 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sg6w" event={"ID":"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212","Type":"ContainerStarted","Data":"82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.891650 4985 generic.go:334] "Generic (PLEG): container finished" podID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerID="dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a" exitCode=0
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.891779 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerDied","Data":"dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a"}
Jan 28 18:36:00 crc kubenswrapper[4985]: I0128 18:36:00.908097 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=39.053814145 podStartE2EDuration="1m2.908072935s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.668561816 +0000 UTC m=+1311.495124637" lastFinishedPulling="2026-01-28 18:35:24.522820606 +0000 UTC m=+1335.349383427" observedRunningTime="2026-01-28 18:36:00.906682606 +0000 UTC m=+1371.733245447" watchObservedRunningTime="2026-01-28 18:36:00.908072935 +0000 UTC m=+1371.734635756"
Jan 28 18:36:01 crc kubenswrapper[4985]: I0128 18:36:01.274943 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f068aa-ed0a-47e7-9f95-16f86bf91343" path="/var/lib/kubelet/pods/12f068aa-ed0a-47e7-9f95-16f86bf91343/volumes"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.228496 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-5q5qm"]
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.231701 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.235107 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.235306 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jbtcd"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.244916 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5q5qm"]
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.351829 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.351982 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.352041 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.352284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.454698 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.455123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.455223 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.455312 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.458853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.459985 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.460124 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.472165 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"glance-db-sync-5q5qm\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") " pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.587804 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.598946 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.658556 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") pod \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") "
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.659013 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") pod \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\" (UID: \"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212\") "
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.661782 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" (UID: "cdbd403f-b5d7-4aba-9ee6-bcbbd933b212"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.676660 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29" (OuterVolumeSpecName: "kube-api-access-bpg29") pod "cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" (UID: "cdbd403f-b5d7-4aba-9ee6-bcbbd933b212"). InnerVolumeSpecName "kube-api-access-bpg29". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.761822 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpg29\" (UniqueName: \"kubernetes.io/projected/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-kube-api-access-bpg29\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.761851 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.920841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerStarted","Data":"1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25"}
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.921195 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.924280 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-9sg6w" event={"ID":"cdbd403f-b5d7-4aba-9ee6-bcbbd933b212","Type":"ContainerDied","Data":"82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00"}
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.924334 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82fa02a88ee932db1116b49896d85803a5d7bac9ce45f395758ed51aa02e8c00"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.924462 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-9sg6w"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.926438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerStarted","Data":"aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c"}
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.927296 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.956003 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=42.843467075 podStartE2EDuration="1m5.955984535s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.622929407 +0000 UTC m=+1311.449492228" lastFinishedPulling="2026-01-28 18:35:23.735446867 +0000 UTC m=+1334.562009688" observedRunningTime="2026-01-28 18:36:03.949192213 +0000 UTC m=+1374.775755034" watchObservedRunningTime="2026-01-28 18:36:03.955984535 +0000 UTC m=+1374.782547356"
Jan 28 18:36:03 crc kubenswrapper[4985]: I0128 18:36:03.985148 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=42.365691017 podStartE2EDuration="1m5.985116617s" podCreationTimestamp="2026-01-28 18:34:58 +0000 UTC" firstStartedPulling="2026-01-28 18:35:00.816661137 +0000 UTC m=+1311.643223948" lastFinishedPulling="2026-01-28 18:35:24.436086717 +0000 UTC m=+1335.262649548" observedRunningTime="2026-01-28 18:36:03.973555621 +0000 UTC m=+1374.800118442" watchObservedRunningTime="2026-01-28 18:36:03.985116617 +0000 UTC m=+1374.811679438"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.299834 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"]
Jan 28 18:36:05 crc kubenswrapper[4985]: E0128 18:36:05.300721 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerName="mariadb-account-create-update"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.300740 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerName="mariadb-account-create-update"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.300994 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" containerName="mariadb-account-create-update"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.306463 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.323732 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"]
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.414052 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.414334 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.428775 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"]
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.430510 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.433440 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-openstack-cell1-db-secret"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.442360 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"]
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.517641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.517800 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.517862 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.518128 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.520334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.543145 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"mysqld-exporter-openstack-cell1-db-create-fvvh2\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") " pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.619903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.620018 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.620665 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.627012 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.644956 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"mysqld-exporter-ba0b-account-create-update-56qr8\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") " pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.946768 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:05 crc kubenswrapper[4985]: I0128 18:36:05.947300 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f"}
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.032434 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-5q5qm"]
Jan 28 18:36:06 crc kubenswrapper[4985]: W0128 18:36:06.037398 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod229b9159_df89_4859_b5f3_d34b2759d0fd.slice/crio-08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09 WatchSource:0}: Error finding container 08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09: Status 404 returned error can't find the container with id 08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.193890 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"]
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.249058 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.552651 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"]
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.960778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerStarted","Data":"1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734"}
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.961038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerStarted","Data":"56df849bc6eab86fbb2f1c43e6b3abacfd8cf4d3de99598c0ea16866523869b5"}
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.962466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerStarted","Data":"08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09"}
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.968782 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerID="b2b6ff931f4d8121ddd40be80d57520170cc490944b52533c2717e3ed1e070dd" exitCode=0
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.968813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" event={"ID":"8c57cd6d-54d8-4d7c-863c-cfd30fab0768","Type":"ContainerDied","Data":"b2b6ff931f4d8121ddd40be80d57520170cc490944b52533c2717e3ed1e070dd"}
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.968832 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" event={"ID":"8c57cd6d-54d8-4d7c-863c-cfd30fab0768","Type":"ContainerStarted","Data":"671eee055a071bc8d961556a69fcdf932528e6509edb02442e024c4f35917d09"}
Jan 28 18:36:06 crc kubenswrapper[4985]: I0128 18:36:06.986107 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" podStartSLOduration=1.986086521 podStartE2EDuration="1.986086521s" podCreationTimestamp="2026-01-28 18:36:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:06.978078535 +0000 UTC m=+1377.804641376" watchObservedRunningTime="2026-01-28 18:36:06.986086521 +0000 UTC m=+1377.812649342"
Jan 28 18:36:07 crc kubenswrapper[4985]: I0128 18:36:07.982028 4985 generic.go:334] "Generic (PLEG): container finished" podID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerID="1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734" exitCode=0
Jan 28 18:36:07 crc kubenswrapper[4985]: I0128 18:36:07.982128 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerDied","Data":"1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734"}
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.401182 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.493370 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") pod \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") "
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.493883 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8c57cd6d-54d8-4d7c-863c-cfd30fab0768" (UID: "8c57cd6d-54d8-4d7c-863c-cfd30fab0768"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.494313 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") pod \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\" (UID: \"8c57cd6d-54d8-4d7c-863c-cfd30fab0768\") "
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.495873 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.501030 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds" (OuterVolumeSpecName: "kube-api-access-n8qds") pod "8c57cd6d-54d8-4d7c-863c-cfd30fab0768" (UID: "8c57cd6d-54d8-4d7c-863c-cfd30fab0768"). InnerVolumeSpecName "kube-api-access-n8qds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.597596 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8qds\" (UniqueName: \"kubernetes.io/projected/8c57cd6d-54d8-4d7c-863c-cfd30fab0768-kube-api-access-n8qds\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.995673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd"}
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.998550 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2" event={"ID":"8c57cd6d-54d8-4d7c-863c-cfd30fab0768","Type":"ContainerDied","Data":"671eee055a071bc8d961556a69fcdf932528e6509edb02442e024c4f35917d09"}
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.998576 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="671eee055a071bc8d961556a69fcdf932528e6509edb02442e024c4f35917d09"
Jan 28 18:36:08 crc kubenswrapper[4985]: I0128 18:36:08.998583 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.209907 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.215863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/4b55b35c-0ef1-4db8-b435-24de7fda8ecc-etc-swift\") pod \"swift-storage-0\" (UID: \"4b55b35c-0ef1-4db8-b435-24de7fda8ecc\") " pod="openstack/swift-storage-0"
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.411510 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.508957 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.619274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") pod \"53f6fb79-54ff-4a24-ad53-5812b6faa504\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") "
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.619475 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") pod \"53f6fb79-54ff-4a24-ad53-5812b6faa504\" (UID: \"53f6fb79-54ff-4a24-ad53-5812b6faa504\") "
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.620078 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "53f6fb79-54ff-4a24-ad53-5812b6faa504" (UID: "53f6fb79-54ff-4a24-ad53-5812b6faa504"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.626666 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx" (OuterVolumeSpecName: "kube-api-access-cwpxx") pod "53f6fb79-54ff-4a24-ad53-5812b6faa504" (UID: "53f6fb79-54ff-4a24-ad53-5812b6faa504"). InnerVolumeSpecName "kube-api-access-cwpxx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.721453 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/53f6fb79-54ff-4a24-ad53-5812b6faa504-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.721496 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwpxx\" (UniqueName: \"kubernetes.io/projected/53f6fb79-54ff-4a24-ad53-5812b6faa504-kube-api-access-cwpxx\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:09 crc kubenswrapper[4985]: I0128 18:36:09.836101 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.009850 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8" event={"ID":"53f6fb79-54ff-4a24-ad53-5812b6faa504","Type":"ContainerDied","Data":"56df849bc6eab86fbb2f1c43e6b3abacfd8cf4d3de99598c0ea16866523869b5"}
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.009890 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56df849bc6eab86fbb2f1c43e6b3abacfd8cf4d3de99598c0ea16866523869b5"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.009932 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-ba0b-account-create-update-56qr8"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.033711 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.654281 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 28 18:36:10 crc kubenswrapper[4985]: E0128 18:36:10.655532 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerName="mariadb-database-create"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.655568 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerName="mariadb-database-create"
Jan 28 18:36:10 crc kubenswrapper[4985]: E0128 18:36:10.655630 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerName="mariadb-account-create-update"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.655639 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerName="mariadb-account-create-update"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.656182 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" containerName="mariadb-database-create"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.656217 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" containerName="mariadb-account-create-update"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.673458 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.676835 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.708027 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.769156 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.769303 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.769340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.871985 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.872068 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.872214 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.879237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.894710 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:10 crc kubenswrapper[4985]: I0128 18:36:10.896530 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"mysqld-exporter-0\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " pod="openstack/mysqld-exporter-0"
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.003738 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0"
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.021208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"58e488e3d5fd637191d4b86c732b0fb14d5b332b19c89bed60cee07e1e816c5f"}
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.186213 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.186785 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.186869 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.187905 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.187965 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a" gracePeriod=600
Jan 28 18:36:11 crc kubenswrapper[4985]: I0128 18:36:11.507370 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"]
Jan 28 18:36:11 crc kubenswrapper[4985]: W0128 18:36:11.519795 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod558a195a_5deb_441a_9eeb_9e506f49597e.slice/crio-85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583 WatchSource:0}: Error finding container 85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583: Status 404 returned error can't find the container with id 85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583
Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.039710 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a" exitCode=0
Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.039778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a"}
Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.039834 4985 scope.go:117] "RemoveContainer" containerID="68c147e3d0c646190ed92593bf974e9555950a450b92447009beba1ebe5c7093"
Jan 28 18:36:12 crc kubenswrapper[4985]: I0128 18:36:12.041656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerStarted","Data":"85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583"}
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.069005 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"}
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.263177 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9r84t" podUID="2d1c1ab5-7e43-47cd-8218-3d945574a79c" containerName="ovn-controller" probeResult="failure" output=<
Jan 28 18:36:13 crc kubenswrapper[4985]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 28 18:36:13 crc kubenswrapper[4985]: >
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.330634 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.334906 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-f287q"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.575876 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"]
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.577296 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.579687 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.591661 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"]
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666026 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666118 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666170 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.666828 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc"
Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770157 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc"
\"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770317 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770398 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770451 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770605 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.770611 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.772861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.773156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: 
\"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.809994 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"ovn-controller-9r84t-config-w57rc\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:13 crc kubenswrapper[4985]: I0128 18:36:13.895779 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:18 crc kubenswrapper[4985]: I0128 18:36:18.218130 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9r84t" podUID="2d1c1ab5-7e43-47cd-8218-3d945574a79c" containerName="ovn-controller" probeResult="failure" output=< Jan 28 18:36:18 crc kubenswrapper[4985]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 18:36:18 crc kubenswrapper[4985]: > Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.834866 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: connect: connection refused" Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.863789 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.876313 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: connect: connection refused" Jan 28 18:36:19 crc kubenswrapper[4985]: I0128 18:36:19.980567 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:36:20 crc kubenswrapper[4985]: E0128 18:36:20.180953 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:20 crc kubenswrapper[4985]: E0128 18:36:20.182100 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:22 crc kubenswrapper[4985]: E0128 18:36:22.172854 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:44796->38.102.83.195:43365: write tcp 38.102.83.195:44796->38.102.83.195:43365: write: connection reset by peer Jan 28 18:36:23 crc kubenswrapper[4985]: I0128 18:36:23.230454 4985 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-9r84t" podUID="2d1c1ab5-7e43-47cd-8218-3d945574a79c" containerName="ovn-controller" probeResult="failure" output=< Jan 28 18:36:23 crc kubenswrapper[4985]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Jan 28 18:36:23 crc kubenswrapper[4985]: > Jan 28 18:36:24 crc kubenswrapper[4985]: E0128 18:36:24.748237 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34" Jan 28 18:36:24 crc kubenswrapper[4985]: E0128 18:36:24.748795 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:thanos-sidecar,Image:registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34,Command:[],Args:[sidecar --prometheus.url=http://localhost:9090/ --grpc-address=:10901 --http-address=:10902 --log.level=info --prometheus.http-client-file=/etc/thanos/config/prometheus.http-client-file.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:10902,Protocol:TCP,HostIP:,},ContainerPort{Name:grpc,HostPort:0,ContainerPort:10901,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:thanos-prometheus-http-client-file,ReadOnly:false,MountPath:/etc/thanos/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gv7d7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-metric-storage-0_openstack(96162e6f-966d-438d-9362-ef03abc4b277): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 28 18:36:24 crc kubenswrapper[4985]: E0128 18:36:24.750206 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" Jan 28 18:36:25 crc kubenswrapper[4985]: E0128 18:36:25.216262 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" Jan 28 18:36:25 crc kubenswrapper[4985]: I0128 18:36:25.713658 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"] Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.236975 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerStarted","Data":"00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.238806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerStarted","Data":"4304837e07a0d35b09132d3b8151c66561b180c75ce1afd3868d65e25580b626"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.240463 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerStarted","Data":"fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.243878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"c3ec7d3fe0003c26958c7864faa954b76fb034fc6cf4e9cb82bb3285bbd8166b"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.244026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"6b257804b520f072ee726aff4dbcbcf2026530dc7877d9752f22ff8244f8ff71"} Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.267240 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-9r84t-config-w57rc" podStartSLOduration=13.267223453 podStartE2EDuration="13.267223453s" podCreationTimestamp="2026-01-28 18:36:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:26.259086853 +0000 UTC m=+1397.085649684" watchObservedRunningTime="2026-01-28 18:36:26.267223453 +0000 UTC m=+1397.093786284" Jan 28 18:36:26 crc kubenswrapper[4985]: I0128 18:36:26.283779 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=2.008000221 podStartE2EDuration="16.283755949s" podCreationTimestamp="2026-01-28 18:36:10 +0000 UTC" firstStartedPulling="2026-01-28 18:36:11.522076763 +0000 UTC m=+1382.348639584" lastFinishedPulling="2026-01-28 18:36:25.797832491 +0000 UTC m=+1396.624395312" observedRunningTime="2026-01-28 18:36:26.276079573 +0000 UTC m=+1397.102642414" watchObservedRunningTime="2026-01-28 18:36:26.283755949 +0000 UTC m=+1397.110318770" Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.256130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerStarted","Data":"8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.262912 4985 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"8dbad6fa2c438cc753b49e19a89b77bbaf282f34ff8f978e465f45a415960ca5"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.262940 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"04c96766cb4d8a87148edb5b1ddcfd2b3727e7bdb901b73bfa11bcf50a0f983d"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.268471 4985 generic.go:334] "Generic (PLEG): container finished" podID="3aa41169-20ef-41dd-a534-929618c93ecf" containerID="00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806" exitCode=0 Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.290408 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-5q5qm" podStartSLOduration=4.660679958 podStartE2EDuration="24.290379789s" podCreationTimestamp="2026-01-28 18:36:03 +0000 UTC" firstStartedPulling="2026-01-28 18:36:06.041204216 +0000 UTC m=+1376.867767037" lastFinishedPulling="2026-01-28 18:36:25.670904047 +0000 UTC m=+1396.497466868" observedRunningTime="2026-01-28 18:36:27.280332245 +0000 UTC m=+1398.106895116" watchObservedRunningTime="2026-01-28 18:36:27.290379789 +0000 UTC m=+1398.116942620" Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.291073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerDied","Data":"00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806"} Jan 28 18:36:27 crc kubenswrapper[4985]: I0128 18:36:27.320674 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:27 crc kubenswrapper[4985]: E0128 18:36:27.323978 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"thanos-sidecar\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/thanos-rhel9@sha256:a223bab813b82d698992490bbb60927f6288a83ba52d539836c250e1471f6d34\\\"\"" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.241422 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-9r84t" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.285197 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"076d26c8df1b7770317a62e3822c0b7e7c64be3f432b53e1acb7682dcd2cceca"} Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.766964 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939724 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939785 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939884 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939881 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939909 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.939969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run" (OuterVolumeSpecName: "var-run") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.940085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") pod \"3aa41169-20ef-41dd-a534-929618c93ecf\" (UID: \"3aa41169-20ef-41dd-a534-929618c93ecf\") " Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.940394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "var-run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941072 4985 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941270 4985 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941117 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941345 4985 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/3aa41169-20ef-41dd-a534-929618c93ecf-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.941697 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts" (OuterVolumeSpecName: "scripts") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:28 crc kubenswrapper[4985]: I0128 18:36:28.944935 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x" (OuterVolumeSpecName: "kube-api-access-fk44x") pod "3aa41169-20ef-41dd-a534-929618c93ecf" (UID: "3aa41169-20ef-41dd-a534-929618c93ecf"). InnerVolumeSpecName "kube-api-access-fk44x". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.045878 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fk44x\" (UniqueName: \"kubernetes.io/projected/3aa41169-20ef-41dd-a534-929618c93ecf-kube-api-access-fk44x\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.046027 4985 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.046100 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3aa41169-20ef-41dd-a534-929618c93ecf-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.309795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"1575f9da4f7494ff2e663abc8f87f3ad4b9b386bc83e6473f8c00a9cd27df0ea"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.310958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"fed3b390b9c40225e985f6c2393c1d7a2a36e9df0162c3b8c0adf2a9c7e328b7"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.311047 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"27712d7d1daf801f78a0b80b4bbdd672994f4e9e9365e368d71d8b5b7c9ef2d1"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.315092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-9r84t-config-w57rc" event={"ID":"3aa41169-20ef-41dd-a534-929618c93ecf","Type":"ContainerDied","Data":"4304837e07a0d35b09132d3b8151c66561b180c75ce1afd3868d65e25580b626"} Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.315272 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4304837e07a0d35b09132d3b8151c66561b180c75ce1afd3868d65e25580b626" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.315431 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-9r84t-config-w57rc" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.836431 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.866404 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.880431 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.927112 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"] Jan 28 18:36:29 crc kubenswrapper[4985]: I0128 18:36:29.975045 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-9r84t-config-w57rc"] Jan 28 18:36:30 crc kubenswrapper[4985]: E0128 18:36:30.427740 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:31 crc kubenswrapper[4985]: I0128 18:36:31.283020 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" path="/var/lib/kubelet/pods/3aa41169-20ef-41dd-a534-929618c93ecf/volumes" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.003123 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-888tv"] Jan 28 18:36:32 crc kubenswrapper[4985]: E0128 18:36:32.004229 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" containerName="ovn-config" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.004268 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" containerName="ovn-config" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.004513 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3aa41169-20ef-41dd-a534-929618c93ecf" containerName="ovn-config" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.005405 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.018953 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-888tv"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.113059 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4fswm"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.114868 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.132354 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.134229 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.138571 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.140308 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.140405 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.146535 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4fswm"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.219029 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.243954 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.245559 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.246526 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247342 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247391 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247409 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.247478 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.248096 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.250008 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.256887 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.275716 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"cinder-db-create-888tv\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.323166 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-888tv" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.344800 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-create-5stnz"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.348485 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.351278 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.351399 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.351767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.352746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.353733 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.353660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.361749 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-49fs2"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.369003 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.373664 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.373895 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.373906 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.374705 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.376838 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5stnz"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.378665 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"cinder-2623-account-create-update-nvftp\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.380737 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"barbican-db-create-4fswm\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.400091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"42677b6b45768a4e26c82339836f4a6db3c2dedb5d1ffef03d828c3bd95e3e76"} Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.400134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"1b5a405cd605ca085e8584ec02e29d6e26dde2f6f00eb347f3a66f2f2443b2f2"} Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.400144 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"fb920d8e8896d7004cd6fa0213cefc59b68255aacd2a26e34a6588f3e7ed5920"} Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.401792 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-49fs2"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.417307 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.419569 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.423399 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-db-secret" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.430929 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454803 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454861 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454890 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.454934 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455023 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455062 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455083 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.455106 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.504269 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.543028 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568350 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568801 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568885 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568920 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.568990 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.569182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.571150 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.597090 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.597706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.603182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.612811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.641780 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-br7rn"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.643578 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"keystone-db-sync-49fs2\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc 
kubenswrapper[4985]: I0128 18:36:32.645356 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"heat-db-create-5stnz\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.648168 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"barbican-8d89-account-create-update-8fw8c\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.652773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"heat-4d8b-account-create-update-hg9ms\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.707910 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.709034 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.712567 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5stnz" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.720894 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.721029 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.747317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.754187 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-br7rn"] Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.760313 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.791925 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.815059 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.815500 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.816294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.816641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.880845 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919522 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919763 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.919968 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.921656 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.923077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.950190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"neutron-2615-account-create-update-8xhkc\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:32 crc kubenswrapper[4985]: I0128 18:36:32.950206 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"neutron-db-create-br7rn\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.034369 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-888tv"] Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.130562 4985 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.152359 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.223292 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4fswm"] Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.299047 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"] Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.336478 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod768c2a33_259c_4194_ad30_8edffff92f18.slice/crio-ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8 WatchSource:0}: Error finding container ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8: Status 404 returned error can't find the container with id ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8 Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.428498 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"7a3e375cf12b62b77d537920d93c88b87a81ab9b2fcc13e3d4b3a1320640e098"} Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.428549 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"0cc2d532b2530baaebe34b9718d266139d05a97dafff3dd3a0e496b978a9a594"} Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.431899 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerStarted","Data":"ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8"} Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.435153 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerStarted","Data":"878c0f310728825bfc3a9f3a42766e3d3fb0ed9db3ca505b2503200e2ee6fa77"} Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.441948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerStarted","Data":"4db841a9fa2f43f46ed12fc0c9a23942efbd002515556520702636ac918f8257"} Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.515988 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-create-5stnz"] Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.521211 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd7074267_6514_4b90_9aef_a4df05b52054.slice/crio-75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140 WatchSource:0}: Error finding container 75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140: Status 404 returned error can't find the container with id 75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140 Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.911538 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6c3b6ba3_2c25_4da1_b02f_de0e776383c1.slice/crio-1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651 WatchSource:0}: Error finding container 1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651: Status 404 returned error can't find the container with id 1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651 Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.923508 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-49fs2"] Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.926994 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod887f886a_9541_4075_9d32_0d8feaf32722.slice/crio-984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed WatchSource:0}: Error finding container 984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed: Status 404 returned error can't find the container with id 984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.951602 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"] Jan 28 18:36:33 crc kubenswrapper[4985]: W0128 18:36:33.968344 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc052fbc1_a102_456b_8658_c954fe91534b.slice/crio-1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107 WatchSource:0}: Error finding container 1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107: Status 404 returned error can't find the container with id 1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107 Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.978632 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"] Jan 28 18:36:33 crc kubenswrapper[4985]: I0128 18:36:33.991014 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"] Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.083002 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-br7rn"] Jan 28 18:36:34 crc kubenswrapper[4985]: W0128 18:36:34.097744 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0fc487cd_a539_4daa_8c13_40d0cea82770.slice/crio-9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6 WatchSource:0}: Error finding container 9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6: Status 404 returned error can't find the container with id 9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6 Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.452651 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerStarted","Data":"62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.455933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerStarted","Data":"d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 
18:36:34.457680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerStarted","Data":"82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.457720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerStarted","Data":"9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.460757 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerStarted","Data":"f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.460811 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerStarted","Data":"984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.465707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerStarted","Data":"92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.465765 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerStarted","Data":"75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.470153 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerStarted","Data":"0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.470214 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerStarted","Data":"1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.472149 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerStarted","Data":"1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.473305 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-4fswm" podStartSLOduration=2.473279469 podStartE2EDuration="2.473279469s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.46976641 +0000 UTC m=+1405.296329231" watchObservedRunningTime="2026-01-28 18:36:34.473279469 +0000 UTC m=+1405.299842310" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.475286 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" 
event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerStarted","Data":"fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.475357 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerStarted","Data":"a168fe30db9e1f0ecb67e71918d9ed1854222d5e171487ecaf9036aefbf99081"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.484475 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"dbb518cab5a475ed6aa31748656a73c8cab2f8878123d8f312714ec43804fa4c"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.484520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"4b55b35c-0ef1-4db8-b435-24de7fda8ecc","Type":"ContainerStarted","Data":"04e7cc17bd0f13ac1e9e12cf6ab2e9775bdddb78309ecd4b7396742d6ad1664e"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.490220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerStarted","Data":"6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0"} Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.503458 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-4d8b-account-create-update-hg9ms" podStartSLOduration=2.50343401 podStartE2EDuration="2.50343401s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.482943782 +0000 UTC m=+1405.309506603" watchObservedRunningTime="2026-01-28 18:36:34.50343401 +0000 UTC m=+1405.329996831" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.506299 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-888tv" podStartSLOduration=3.5062807510000003 podStartE2EDuration="3.506280751s" podCreationTimestamp="2026-01-28 18:36:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.496637798 +0000 UTC m=+1405.323200639" watchObservedRunningTime="2026-01-28 18:36:34.506280751 +0000 UTC m=+1405.332843582" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.543573 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-br7rn" podStartSLOduration=2.5435514230000003 podStartE2EDuration="2.543551423s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.511726504 +0000 UTC m=+1405.338289325" watchObservedRunningTime="2026-01-28 18:36:34.543551423 +0000 UTC m=+1405.370114264" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.577935 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-create-5stnz" podStartSLOduration=2.577916253 podStartE2EDuration="2.577916253s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 18:36:34.525576735 +0000 UTC m=+1405.352139556" watchObservedRunningTime="2026-01-28 18:36:34.577916253 +0000 UTC m=+1405.404479064" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.583440 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-2615-account-create-update-8xhkc" podStartSLOduration=2.583431209 podStartE2EDuration="2.583431209s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.54063366 +0000 UTC m=+1405.367196491" watchObservedRunningTime="2026-01-28 18:36:34.583431209 +0000 UTC m=+1405.409994020" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.587429 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-2623-account-create-update-nvftp" podStartSLOduration=2.587420721 podStartE2EDuration="2.587420721s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.555280334 +0000 UTC m=+1405.381843155" watchObservedRunningTime="2026-01-28 18:36:34.587420721 +0000 UTC m=+1405.413983542" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.599634 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8d89-account-create-update-8fw8c" podStartSLOduration=2.599615496 podStartE2EDuration="2.599615496s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:34.569044803 +0000 UTC m=+1405.395607624" watchObservedRunningTime="2026-01-28 18:36:34.599615496 +0000 UTC m=+1405.426178317" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.620131 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=37.276589837 podStartE2EDuration="58.620106744s" podCreationTimestamp="2026-01-28 18:35:36 +0000 UTC" firstStartedPulling="2026-01-28 18:36:10.043712925 +0000 UTC m=+1380.870275746" lastFinishedPulling="2026-01-28 18:36:31.387229832 +0000 UTC m=+1402.213792653" observedRunningTime="2026-01-28 18:36:34.612383716 +0000 UTC m=+1405.438946537" watchObservedRunningTime="2026-01-28 18:36:34.620106744 +0000 UTC m=+1405.446669555" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.901853 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"] Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.904042 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.912244 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 18:36:34 crc kubenswrapper[4985]: I0128 18:36:34.919773 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"] Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.094592 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.094656 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.094998 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.095165 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.095611 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.095670 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: E0128 18:36:35.188432 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: 
\"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198091 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198177 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198708 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.198751 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.199315 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.199536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.200204 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.200467 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 
crc kubenswrapper[4985]: I0128 18:36:35.200879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.220394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"dnsmasq-dns-5c79d794d7-cv528\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") " pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.232285 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.501515 4985 generic.go:334] "Generic (PLEG): container finished" podID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerID="62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.502224 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerDied","Data":"62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.506037 4985 generic.go:334] "Generic (PLEG): container finished" podID="0a7822ab-0225-4deb-a283-374e32bc995f" containerID="d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.506096 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerDied","Data":"d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.508145 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerID="fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.508229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerDied","Data":"fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.509692 4985 generic.go:334] "Generic (PLEG): container finished" podID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerID="82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.509761 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerDied","Data":"82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.511875 4985 generic.go:334] "Generic (PLEG): container finished" podID="887f886a-9541-4075-9d32-0d8feaf32722" containerID="f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.511948 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerDied","Data":"f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.514808 4985 generic.go:334] "Generic (PLEG): container finished" podID="768c2a33-259c-4194-ad30-8edffff92f18" containerID="6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.514898 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerDied","Data":"6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.520806 4985 generic.go:334] "Generic (PLEG): container finished" podID="d7074267-6514-4b90-9aef-a4df05b52054" containerID="92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.520945 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerDied","Data":"92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.522640 4985 generic.go:334] "Generic (PLEG): container finished" podID="c052fbc1-a102-456b-8658-c954fe91534b" containerID="0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce" exitCode=0 Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.522710 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerDied","Data":"0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce"} Jan 28 18:36:35 crc kubenswrapper[4985]: I0128 18:36:35.786426 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"] Jan 28 18:36:36 crc kubenswrapper[4985]: I0128 18:36:36.535614 4985 generic.go:334] "Generic (PLEG): container finished" podID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerID="d7223a7a628a68fecc17a7f4ec70d47a10ad7c02ac73f8bb90091f9b898b7963" exitCode=0 Jan 28 18:36:36 crc kubenswrapper[4985]: I0128 18:36:36.535890 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerDied","Data":"d7223a7a628a68fecc17a7f4ec70d47a10ad7c02ac73f8bb90091f9b898b7963"} Jan 28 18:36:36 crc kubenswrapper[4985]: I0128 18:36:36.542428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerStarted","Data":"c65b2c3c36b7551d10c8a76b6864da53073d25c462caf52ecb94744b028234fc"} Jan 28 18:36:37 crc kubenswrapper[4985]: I0128 18:36:37.320350 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:37 crc kubenswrapper[4985]: I0128 18:36:37.323469 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.154654 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-create-5stnz" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.165523 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.175147 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.299560 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") pod \"d7074267-6514-4b90-9aef-a4df05b52054\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.299751 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") pod \"768c2a33-259c-4194-ad30-8edffff92f18\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.299885 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") pod \"887f886a-9541-4075-9d32-0d8feaf32722\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300040 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") pod \"d7074267-6514-4b90-9aef-a4df05b52054\" (UID: \"d7074267-6514-4b90-9aef-a4df05b52054\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300082 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") pod \"887f886a-9541-4075-9d32-0d8feaf32722\" (UID: \"887f886a-9541-4075-9d32-0d8feaf32722\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300300 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") pod \"768c2a33-259c-4194-ad30-8edffff92f18\" (UID: \"768c2a33-259c-4194-ad30-8edffff92f18\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.300716 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "768c2a33-259c-4194-ad30-8edffff92f18" (UID: "768c2a33-259c-4194-ad30-8edffff92f18"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301111 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d7074267-6514-4b90-9aef-a4df05b52054" (UID: "d7074267-6514-4b90-9aef-a4df05b52054"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301358 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/768c2a33-259c-4194-ad30-8edffff92f18-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301380 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d7074267-6514-4b90-9aef-a4df05b52054-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.301511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "887f886a-9541-4075-9d32-0d8feaf32722" (UID: "887f886a-9541-4075-9d32-0d8feaf32722"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.305459 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr" (OuterVolumeSpecName: "kube-api-access-f6gwr") pod "d7074267-6514-4b90-9aef-a4df05b52054" (UID: "d7074267-6514-4b90-9aef-a4df05b52054"). InnerVolumeSpecName "kube-api-access-f6gwr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.305713 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n" (OuterVolumeSpecName: "kube-api-access-7bv6n") pod "768c2a33-259c-4194-ad30-8edffff92f18" (UID: "768c2a33-259c-4194-ad30-8edffff92f18"). InnerVolumeSpecName "kube-api-access-7bv6n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.306873 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr" (OuterVolumeSpecName: "kube-api-access-cbjjr") pod "887f886a-9541-4075-9d32-0d8feaf32722" (UID: "887f886a-9541-4075-9d32-0d8feaf32722"). InnerVolumeSpecName "kube-api-access-cbjjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.351026 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.401470 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403146 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/887f886a-9541-4075-9d32-0d8feaf32722-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403165 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbjjr\" (UniqueName: \"kubernetes.io/projected/887f886a-9541-4075-9d32-0d8feaf32722-kube-api-access-cbjjr\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403175 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7bv6n\" (UniqueName: \"kubernetes.io/projected/768c2a33-259c-4194-ad30-8edffff92f18-kube-api-access-7bv6n\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.403185 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6gwr\" (UniqueName: \"kubernetes.io/projected/d7074267-6514-4b90-9aef-a4df05b52054-kube-api-access-f6gwr\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.455569 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.481425 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.490516 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-888tv" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.511714 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") pod \"c052fbc1-a102-456b-8658-c954fe91534b\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.511937 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") pod \"c052fbc1-a102-456b-8658-c954fe91534b\" (UID: \"c052fbc1-a102-456b-8658-c954fe91534b\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.511995 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") pod \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.512152 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") pod \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\" (UID: \"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.512718 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c052fbc1-a102-456b-8658-c954fe91534b" (UID: 
"c052fbc1-a102-456b-8658-c954fe91534b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.513172 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c052fbc1-a102-456b-8658-c954fe91534b-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.516210 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" (UID: "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.530890 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6" (OuterVolumeSpecName: "kube-api-access-k7lz6") pod "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" (UID: "3bd289b0-2807-4b7e-bdc0-300fe0ce09b2"). InnerVolumeSpecName "kube-api-access-k7lz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.531380 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9" (OuterVolumeSpecName: "kube-api-access-sd4g9") pod "c052fbc1-a102-456b-8658-c954fe91534b" (UID: "c052fbc1-a102-456b-8658-c954fe91534b"). InnerVolumeSpecName "kube-api-access-sd4g9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648026 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") pod \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\" (UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") pod \"0fc487cd-a539-4daa-8c13-40d0cea82770\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648167 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") pod \"0a7822ab-0225-4deb-a283-374e32bc995f\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648205 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") pod \"0fc487cd-a539-4daa-8c13-40d0cea82770\" (UID: \"0fc487cd-a539-4daa-8c13-40d0cea82770\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648341 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") pod \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\" 
(UID: \"6d078ca4-34dd-4a65-a2e4-ffc23f098285\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.648440 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") pod \"0a7822ab-0225-4deb-a283-374e32bc995f\" (UID: \"0a7822ab-0225-4deb-a283-374e32bc995f\") " Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.649652 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.649675 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd4g9\" (UniqueName: \"kubernetes.io/projected/c052fbc1-a102-456b-8658-c954fe91534b-kube-api-access-sd4g9\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.649684 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7lz6\" (UniqueName: \"kubernetes.io/projected/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2-kube-api-access-k7lz6\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.654520 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6d078ca4-34dd-4a65-a2e4-ffc23f098285" (UID: "6d078ca4-34dd-4a65-a2e4-ffc23f098285"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.655381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0fc487cd-a539-4daa-8c13-40d0cea82770" (UID: "0fc487cd-a539-4daa-8c13-40d0cea82770"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.656396 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-br7rn" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.656548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-br7rn" event={"ID":"0fc487cd-a539-4daa-8c13-40d0cea82770","Type":"ContainerDied","Data":"9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.656591 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b481be9c716c9f39beb62f480cfddd2a42621477214ff033e05b2b5b835ffc6" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.660458 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f" (OuterVolumeSpecName: "kube-api-access-9nh2f") pod "0a7822ab-0225-4deb-a283-374e32bc995f" (UID: "0a7822ab-0225-4deb-a283-374e32bc995f"). InnerVolumeSpecName "kube-api-access-9nh2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.660908 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0a7822ab-0225-4deb-a283-374e32bc995f" (UID: "0a7822ab-0225-4deb-a283-374e32bc995f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.670694 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6" (OuterVolumeSpecName: "kube-api-access-sznc6") pod "6d078ca4-34dd-4a65-a2e4-ffc23f098285" (UID: "6d078ca4-34dd-4a65-a2e4-ffc23f098285"). InnerVolumeSpecName "kube-api-access-sznc6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.670892 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh" (OuterVolumeSpecName: "kube-api-access-2jvxh") pod "0fc487cd-a539-4daa-8c13-40d0cea82770" (UID: "0fc487cd-a539-4daa-8c13-40d0cea82770"). InnerVolumeSpecName "kube-api-access-2jvxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.671689 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-2623-account-create-update-nvftp" event={"ID":"768c2a33-259c-4194-ad30-8edffff92f18","Type":"ContainerDied","Data":"ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.671739 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecb3d72abfb6529e55bef966e16c1f2c1354aa8a5b5b348c81d42fb89721fca8" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.671827 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-2623-account-create-update-nvftp" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.706402 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754647 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6d078ca4-34dd-4a65-a2e4-ffc23f098285-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754677 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0fc487cd-a539-4daa-8c13-40d0cea82770-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754688 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nh2f\" (UniqueName: \"kubernetes.io/projected/0a7822ab-0225-4deb-a283-374e32bc995f-kube-api-access-9nh2f\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754699 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2jvxh\" (UniqueName: \"kubernetes.io/projected/0fc487cd-a539-4daa-8c13-40d0cea82770-kube-api-access-2jvxh\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754708 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sznc6\" (UniqueName: \"kubernetes.io/projected/6d078ca4-34dd-4a65-a2e4-ffc23f098285-kube-api-access-sznc6\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.754716 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0a7822ab-0225-4deb-a283-374e32bc995f-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.755580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-888tv" event={"ID":"0a7822ab-0225-4deb-a283-374e32bc995f","Type":"ContainerDied","Data":"4db841a9fa2f43f46ed12fc0c9a23942efbd002515556520702636ac918f8257"} Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.755621 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4db841a9fa2f43f46ed12fc0c9a23942efbd002515556520702636ac918f8257" Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.755716 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-888tv"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.761364 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=20.614142907 podStartE2EDuration="1m34.761348512s" podCreationTimestamp="2026-01-28 18:35:05 +0000 UTC" firstStartedPulling="2026-01-28 18:35:25.011013359 +0000 UTC m=+1335.837576180" lastFinishedPulling="2026-01-28 18:36:39.158218954 +0000 UTC m=+1409.984781785" observedRunningTime="2026-01-28 18:36:39.759531651 +0000 UTC m=+1410.586094462" watchObservedRunningTime="2026-01-28 18:36:39.761348512 +0000 UTC m=+1410.587911333"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.771624 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerStarted","Data":"9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95"}
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.771847 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c79d794d7-cv528"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.799892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4fswm" event={"ID":"6d078ca4-34dd-4a65-a2e4-ffc23f098285","Type":"ContainerDied","Data":"878c0f310728825bfc3a9f3a42766e3d3fb0ed9db3ca505b2503200e2ee6fa77"}
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.799944 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="878c0f310728825bfc3a9f3a42766e3d3fb0ed9db3ca505b2503200e2ee6fa77"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.800047 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4fswm"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.831595 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-2615-account-create-update-8xhkc" event={"ID":"3bd289b0-2807-4b7e-bdc0-300fe0ce09b2","Type":"ContainerDied","Data":"a168fe30db9e1f0ecb67e71918d9ed1854222d5e171487ecaf9036aefbf99081"}
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.831634 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a168fe30db9e1f0ecb67e71918d9ed1854222d5e171487ecaf9036aefbf99081"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.831717 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-2615-account-create-update-8xhkc"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.878811 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" podStartSLOduration=5.878789847 podStartE2EDuration="5.878789847s" podCreationTimestamp="2026-01-28 18:36:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:39.866799169 +0000 UTC m=+1410.693361990" watchObservedRunningTime="2026-01-28 18:36:39.878789847 +0000 UTC m=+1410.705352668"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.890795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-4d8b-account-create-update-hg9ms" event={"ID":"887f886a-9541-4075-9d32-0d8feaf32722","Type":"ContainerDied","Data":"984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed"}
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.890835 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="984e4e85639a956b60501d757e9602c30171f0c99cac004139cea3d3065189ed"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.890914 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-4d8b-account-create-update-hg9ms"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.900224 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-create-5stnz" event={"ID":"d7074267-6514-4b90-9aef-a4df05b52054","Type":"ContainerDied","Data":"75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140"}
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.900278 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f942ae970ad028b425e9af3a3f818f393271df882679e9573bc257f9498140"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.900347 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-create-5stnz"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.908368 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8d89-account-create-update-8fw8c" event={"ID":"c052fbc1-a102-456b-8658-c954fe91534b","Type":"ContainerDied","Data":"1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107"}
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.908419 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f14ae2db62227ad2df0eb4aff6945386761f0321f1e22dc06d06af0bbe4a107"
Jan 28 18:36:39 crc kubenswrapper[4985]: I0128 18:36:39.908513 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8d89-account-create-update-8fw8c"
Jan 28 18:36:40 crc kubenswrapper[4985]: E0128 18:36:40.690723 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.920640 4985 generic.go:334] "Generic (PLEG): container finished" podID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerID="8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d" exitCode=0
Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.920674 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerDied","Data":"8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d"}
Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.924869 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerStarted","Data":"66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98"}
Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.927010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerStarted","Data":"ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b"}
Jan 28 18:36:40 crc kubenswrapper[4985]: I0128 18:36:40.965614 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-49fs2" podStartSLOduration=3.723944768 podStartE2EDuration="8.96557827s" podCreationTimestamp="2026-01-28 18:36:32 +0000 UTC" firstStartedPulling="2026-01-28 18:36:33.916270753 +0000 UTC m=+1404.742833574" lastFinishedPulling="2026-01-28 18:36:39.157904255 +0000 UTC m=+1409.984467076" observedRunningTime="2026-01-28 18:36:40.955432184 +0000 UTC m=+1411.781995015" watchObservedRunningTime="2026-01-28 18:36:40.96557827 +0000 UTC m=+1411.792141091"
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.326852 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.469106 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.539505 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") "
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.539645 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") "
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.539932 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") "
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.540013 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") pod \"229b9159-df89-4859-b5f3-d34b2759d0fd\" (UID: \"229b9159-df89-4859-b5f3-d34b2759d0fd\") "
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.544987 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.551504 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl" (OuterVolumeSpecName: "kube-api-access-drvrl") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "kube-api-access-drvrl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.569160 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.607895 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data" (OuterVolumeSpecName: "config-data") pod "229b9159-df89-4859-b5f3-d34b2759d0fd" (UID: "229b9159-df89-4859-b5f3-d34b2759d0fd"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642100 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642143 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642157 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/229b9159-df89-4859-b5f3-d34b2759d0fd-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.642170 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-drvrl\" (UniqueName: \"kubernetes.io/projected/229b9159-df89-4859-b5f3-d34b2759d0fd-kube-api-access-drvrl\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947555 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-5q5qm"
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947956 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader" containerID="cri-o://d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd" gracePeriod=600
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-5q5qm" event={"ID":"229b9159-df89-4859-b5f3-d34b2759d0fd","Type":"ContainerDied","Data":"08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09"}
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.948033 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08b2b218ba733f91c11c5e317ad93617dac7e3c043b5d4fce759166ed128bc09"
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.947768 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus" containerID="cri-o://e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f" gracePeriod=600
Jan 28 18:36:42 crc kubenswrapper[4985]: I0128 18:36:42.948097 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar" containerID="cri-o://66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98" gracePeriod=600
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.329953 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"]
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.330184 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns" containerID="cri-o://9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95" gracePeriod=10
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.388640 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"]
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389115 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c052fbc1-a102-456b-8658-c954fe91534b" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389131 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c052fbc1-a102-456b-8658-c954fe91534b" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389143 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389149 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389158 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7074267-6514-4b90-9aef-a4df05b52054" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389164 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7074267-6514-4b90-9aef-a4df05b52054" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389176 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389182 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389197 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389204 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389215 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerName="glance-db-sync"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389223 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerName="glance-db-sync"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389237 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="768c2a33-259c-4194-ad30-8edffff92f18" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389242 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="768c2a33-259c-4194-ad30-8edffff92f18" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389287 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="887f886a-9541-4075-9d32-0d8feaf32722" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389294 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="887f886a-9541-4075-9d32-0d8feaf32722" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: E0128 18:36:43.389304 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389309 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389485 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="887f886a-9541-4075-9d32-0d8feaf32722" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389500 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c052fbc1-a102-456b-8658-c954fe91534b" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389513 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389526 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="768c2a33-259c-4194-ad30-8edffff92f18" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389535 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7074267-6514-4b90-9aef-a4df05b52054" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389544 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" containerName="glance-db-sync"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389555 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" containerName="mariadb-account-create-update"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389565 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.389577 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" containerName="mariadb-database-create"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.390632 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.428546 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"]
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461467 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461528 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461591 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.461729 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563142 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563225 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563329 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563376 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.563401 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.564107 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.564232 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.565559 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.565767 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.567026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.599290 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"dnsmasq-dns-5f59b8f679-rtvmd\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:43 crc kubenswrapper[4985]: I0128 18:36:43.874836 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd"
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.005826 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98" exitCode=0
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006147 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd" exitCode=0
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006162 4985 generic.go:334] "Generic (PLEG): container finished" podID="96162e6f-966d-438d-9362-ef03abc4b277" containerID="e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f" exitCode=0
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006303 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98"}
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006336 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd"}
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006353 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f"}
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006367 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"96162e6f-966d-438d-9362-ef03abc4b277","Type":"ContainerDied","Data":"e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22"}
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.006379 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0335762536628c672e38c65f8ba0c729df89b224221c2b13c1cb19cb0e6ee22"
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.008953 4985 generic.go:334] "Generic (PLEG): container finished" podID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerID="ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b" exitCode=0
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.009014 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerDied","Data":"ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b"}
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012749 4985 generic.go:334] "Generic (PLEG): container finished" podID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerID="9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95" exitCode=0
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012782 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerDied","Data":"9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95"}
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012806 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c79d794d7-cv528" event={"ID":"51c32b56-4c7e-47e9-b47e-7bcf6295d854","Type":"ContainerDied","Data":"c65b2c3c36b7551d10c8a76b6864da53073d25c462caf52ecb94744b028234fc"}
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.012816 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c65b2c3c36b7551d10c8a76b6864da53073d25c462caf52ecb94744b028234fc"
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.039356 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.041592 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528"
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191268 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191320 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191356 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191419 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191465 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191492 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191524 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191549 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191625 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") pod \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\" (UID: \"51c32b56-4c7e-47e9-b47e-7bcf6295d854\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191658 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191718 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191746 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191911 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191951 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.191972 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") pod \"96162e6f-966d-438d-9362-ef03abc4b277\" (UID: \"96162e6f-966d-438d-9362-ef03abc4b277\") "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.208744 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.212666 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.226559 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.256265 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.288725 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config" (OuterVolumeSpecName: "config") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303598 4985 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303624 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303635 4985 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303649 4985 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303660 4985 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/96162e6f-966d-438d-9362-ef03abc4b277-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.303849 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7" (OuterVolumeSpecName: "kube-api-access-gv7d7") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "kube-api-access-gv7d7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.305826 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out" (OuterVolumeSpecName: "config-out") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.314717 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj" (OuterVolumeSpecName: "kube-api-access-tbssj") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "kube-api-access-tbssj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.321450 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.408571 4985 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/96162e6f-966d-438d-9362-ef03abc4b277-config-out\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.409970 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbssj\" (UniqueName: \"kubernetes.io/projected/51c32b56-4c7e-47e9-b47e-7bcf6295d854-kube-api-access-tbssj\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.410050 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gv7d7\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-kube-api-access-gv7d7\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.410617 4985 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/96162e6f-966d-438d-9362-ef03abc4b277-tls-assets\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.427426 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.447402 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config" (OuterVolumeSpecName: "web-config") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.447813 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.460117 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "96162e6f-966d-438d-9362-ef03abc4b277" (UID: "96162e6f-966d-438d-9362-ef03abc4b277"). InnerVolumeSpecName "pvc-8e57ef50-627c-40e8-9faa-6585e96efec9". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.503729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.503769 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config" (OuterVolumeSpecName: "config") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514072 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") on node \"crc\" "
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514106 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514119 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514129 4985 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/96162e6f-966d-438d-9362-ef03abc4b277-web-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514139 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.514147 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.531476 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "51c32b56-4c7e-47e9-b47e-7bcf6295d854" (UID: "51c32b56-4c7e-47e9-b47e-7bcf6295d854"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.574289 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.574589 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-8e57ef50-627c-40e8-9faa-6585e96efec9" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9") on node "crc"
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.582371 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"]
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.616220 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/51c32b56-4c7e-47e9-b47e-7bcf6295d854-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:44 crc kubenswrapper[4985]: I0128 18:36:44.616266 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") on node \"crc\" DevicePath \"\""
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.028737 4985 generic.go:334] "Generic (PLEG): container finished" podID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerID="b25b93afe5c0b9bcdcecf1bc670732171d335e6245638df0593c3602ff20f598" exitCode=0
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.028855 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.030915 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerDied","Data":"b25b93afe5c0b9bcdcecf1bc670732171d335e6245638df0593c3602ff20f598"}
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.030984 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerStarted","Data":"f74f0bb6300abf03a41f5514522429abdf0847f34f1d56df2ed73e73e25973ab"}
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.031005 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c79d794d7-cv528"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.099163 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"]
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.123652 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c79d794d7-cv528"]
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.134097 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.143205 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.154539 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155028 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155045 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar"
Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155062 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155068 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus"
Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155095 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="init-config-reloader"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155101 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="init-config-reloader"
Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155108 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="init"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155114 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="init"
Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155129 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155135 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns"
Jan 28 18:36:45 crc kubenswrapper[4985]: E0128 18:36:45.155157 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155163 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155372 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="prometheus"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155391 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="thanos-sidecar"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155412 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" containerName="dnsmasq-dns"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.155428 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="96162e6f-966d-438d-9362-ef03abc4b277" containerName="config-reloader"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.157901 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162414 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162553 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162650 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-wj229"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162926 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.162963 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.163102 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.168743 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.170745 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.183848 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.185272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228108 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228179 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228403 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228469 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228558 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pczfz\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-kube-api-access-pczfz\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228602 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d356801-0ed0-4343-87a9-29d23453d621-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228674 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228722 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.228750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.276187 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c32b56-4c7e-47e9-b47e-7bcf6295d854" path="/var/lib/kubelet/pods/51c32b56-4c7e-47e9-b47e-7bcf6295d854/volumes"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.276900 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96162e6f-966d-438d-9362-ef03abc4b277" path="/var/lib/kubelet/pods/96162e6f-966d-438d-9362-ef03abc4b277/volumes"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.331548 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.332799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334496 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334556 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334612 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.334841 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335000 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pczfz\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-kube-api-access-pczfz\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335024 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335142 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d356801-0ed0-4343-87a9-29d23453d621-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.335165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0"
Jan 28 18:36:45 crc kubenswrapper[4985]: I0128
18:36:45.336164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.338149 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.339893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.343912 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.344382 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.347853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.348026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/3d356801-0ed0-4343-87a9-29d23453d621-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.348538 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.349585 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/3d356801-0ed0-4343-87a9-29d23453d621-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: 
\"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.351629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.352235 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.352349 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/48fd35393a2bd67e182a1b8f0b6bc712b43ce2f1ef21a21dd138faec48abf12b/globalmount\"" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.357797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pczfz\" (UniqueName: \"kubernetes.io/projected/3d356801-0ed0-4343-87a9-29d23453d621-kube-api-access-pczfz\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.359272 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/3d356801-0ed0-4343-87a9-29d23453d621-config\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.410172 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8e57ef50-627c-40e8-9faa-6585e96efec9\") pod \"prometheus-metric-storage-0\" (UID: \"3d356801-0ed0-4343-87a9-29d23453d621\") " pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.492737 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.508938 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.568000 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") pod \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.568105 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") pod \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.568138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") pod \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\" (UID: \"6c3b6ba3-2c25-4da1-b02f-de0e776383c1\") " Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.577960 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2" (OuterVolumeSpecName: "kube-api-access-pdbg2") pod "6c3b6ba3-2c25-4da1-b02f-de0e776383c1" (UID: "6c3b6ba3-2c25-4da1-b02f-de0e776383c1"). InnerVolumeSpecName "kube-api-access-pdbg2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.602838 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6c3b6ba3-2c25-4da1-b02f-de0e776383c1" (UID: "6c3b6ba3-2c25-4da1-b02f-de0e776383c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.655346 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data" (OuterVolumeSpecName: "config-data") pod "6c3b6ba3-2c25-4da1-b02f-de0e776383c1" (UID: "6c3b6ba3-2c25-4da1-b02f-de0e776383c1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.670816 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.670847 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdbg2\" (UniqueName: \"kubernetes.io/projected/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-kube-api-access-pdbg2\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:45 crc kubenswrapper[4985]: I0128 18:36:45.670862 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6c3b6ba3-2c25-4da1-b02f-de0e776383c1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.017155 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.039721 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-49fs2" event={"ID":"6c3b6ba3-2c25-4da1-b02f-de0e776383c1","Type":"ContainerDied","Data":"1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651"} Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.039762 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b616f9e3ec4c319170e5680dda343c90b7cda9d924d473f9e17bb899d17b651" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.039813 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-49fs2" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.051220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerStarted","Data":"a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d"} Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.051394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.055681 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"636462e069d2e5920aa31d8b295f607f9f97f02c2dc1a1b570b5034f342ccb08"} Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.091693 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" podStartSLOduration=3.091668621 podStartE2EDuration="3.091668621s" podCreationTimestamp="2026-01-28 18:36:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:46.080121545 +0000 UTC m=+1416.906684366" watchObservedRunningTime="2026-01-28 18:36:46.091668621 +0000 UTC m=+1416.918231442" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.228606 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.266782 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:36:46 crc kubenswrapper[4985]: E0128 18:36:46.270823 4985 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerName="keystone-db-sync" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.270866 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerName="keystone-db-sync" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.281075 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" containerName="keystone-db-sync" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.283015 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.288661 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.289076 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.289382 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.290098 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.290345 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.312507 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.317852 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.326690 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.386188 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393712 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393760 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393923 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.393999 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394039 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394066 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394088 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.394107 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.487344 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.488704 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502567 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502598 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502657 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502678 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502719 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.502734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod 
\"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.511216 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.513599 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.518069 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.519194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.519797 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.527450 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.527622 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.527688 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.531353 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.533177 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.535787 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.535976 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9xd8p" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.537964 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.539936 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.543812 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.550336 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.592982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"dnsmasq-dns-bbf5cc879-tgjz6\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.610354 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.611709 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.626794 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"keystone-bootstrap-h27v9\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.629537 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.629587 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.629717 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.637419 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.637609 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.637706 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cnbtl" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.694322 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731329 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731392 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731489 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731506 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731524 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.731552 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.750097 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.756541 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.758050 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.761866 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.770311 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r9qmf" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.770696 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.795304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.815117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.816671 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.838884 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.838999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx7rs\" (UniqueName: 
\"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839028 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839084 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839137 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839158 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839179 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839200 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.839222 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.864137 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.864950 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"neutron-db-sync-dwwcb\" (UID: 
\"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.865071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"heat-db-sync-qjrfx\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.897486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"neutron-db-sync-dwwcb\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.924087 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941030 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941198 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941384 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.941452 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.945372 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"cinder-db-sync-s8hs9\" (UID: 
\"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.946996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.954841 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.955620 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.957233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:46 crc kubenswrapper[4985]: I0128 18:36:46.965987 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:46.999134 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"cinder-db-sync-s8hs9\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.015137 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-8h4kr"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.016756 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.045699 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.046577 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.046771 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fpld6" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.057811 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.061645 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8h4kr"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.100895 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.115121 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9w9wm"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146371 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146664 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.146687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.156041 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.160642 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fl96f" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.160828 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.178950 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.238106 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9w9wm"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.249361 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.251851 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.251971 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252088 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252217 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252332 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252426 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.252659 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.257891 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.258536 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.263018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.263332 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.264524 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.289190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.295013 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"placement-db-sync-8h4kr\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355074 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355223 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355243 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355386 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355417 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355459 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.355525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.358763 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.362267 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.405753 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-8h4kr" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.409542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"barbican-db-sync-9w9wm\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.436756 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.440923 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.445525 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.447536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459384 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459478 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459579 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459711 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.459735 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.461281 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.461609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.461828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.462361 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.464159 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.495696 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"dnsmasq-dns-56df8fb6b7-zbf7x\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.509459 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.527467 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.531679 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.535842 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.536082 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.536226 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-jbtcd" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.540813 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578425 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578515 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578855 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.578962 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.579067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.622805 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.624514 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681643 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681692 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681754 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681775 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681845 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681866 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681894 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681936 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681964 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.681985 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.682015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.682055 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.682072 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.686639 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.687053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.692384 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.696835 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.699921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"ceilometer-0\" (UID: 
\"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.700876 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.755419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"ceilometer-0\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: W0128 18:36:47.769422 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podedd90323_75fd_4b14_8cba_b1db7a93c2e2.slice/crio-9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194 WatchSource:0}: Error finding container 9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194: Status 404 returned error can't find the container with id 9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194 Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.777798 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785386 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785445 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785521 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785545 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785635 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785664 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.785776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.787049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.787523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.792566 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.798852 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.805398 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.808310 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.813910 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.829096 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.830237 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.839688 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.856633 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.856685 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d6568af50c46d048a9023d9ac84db4baa0cf8b023fb9ef6c59e622b024bcc77/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.867989 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.888809 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.888889 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.888917 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889115 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889296 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889380 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.889481 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.922769 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.992805 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.992867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.992921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993013 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993054 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993202 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.993831 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:47 crc kubenswrapper[4985]: I0128 18:36:47.997656 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.000928 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.002010 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.002063 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d04256428a5045d3b55ec61489edb632decdf9f4666f3e6952b725d307784bb2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.010092 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.011238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.035977 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.112047 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.119732 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.143906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.263907 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" containerID="cri-o://a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d" gracePeriod=10 Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.264222 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerStarted","Data":"9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194"} Jan 28 18:36:48 crc kubenswrapper[4985]: E0128 18:36:48.372874 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:48 crc kubenswrapper[4985]: E0128 18:36:48.378583 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0fb3881_97de_41ce_a664_51e5d4dea3e1.slice/crio-a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.446422 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.626390 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.646307 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:36:48 crc kubenswrapper[4985]: I0128 18:36:48.661573 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.130443 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9w9wm"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.141340 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-8h4kr"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.155393 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.182009 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:36:49 crc kubenswrapper[4985]: W0128 18:36:49.270858 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf788adab_3912_43da_869e_2450d65b761f.slice/crio-a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c WatchSource:0}: Error finding container a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c: Status 404 returned error can't find the container with id a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c Jan 28 18:36:49 crc kubenswrapper[4985]: W0128 18:36:49.275456 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6 WatchSource:0}: Error finding container 3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6: Status 404 returned error can't find the container with id 3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6 Jan 28 18:36:49 crc kubenswrapper[4985]: W0128 18:36:49.279746 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ab3789a_5136_46f9_94bb_ab43720d0723.slice/crio-bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199 WatchSource:0}: Error finding container bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199: Status 404 returned error can't find the container with id bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199 Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.330657 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerStarted","Data":"1b5ced815ed25f34faa5ff921cdb8509638b39e75db318b0ce2521c26d4d3829"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.354892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerStarted","Data":"f594c9e7d10fa6181857cdca65cc9afd3cc6e7a2e73bb7a606297e4b8c0e60db"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.408718 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerStarted","Data":"0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.409000 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" containerID="cri-o://0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26" gracePeriod=10 Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.440817 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerStarted","Data":"29e494db6715043d1dade09c32717d476d44c5754f6d809807167b425de76172"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.465688 4985 generic.go:334] "Generic (PLEG): container finished" podID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerID="a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d" exitCode=0 Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.466064 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerDied","Data":"a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.469980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerStarted","Data":"94e9ea7881e540161402fe0b16a42aca0004dbafe8de2259a73da5d4a537b2b5"} Jan 28 18:36:49 crc kubenswrapper[4985]: I0128 18:36:49.670152 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.008557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.123280 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.189586 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.200045 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.325465 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.336965 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337135 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337232 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337290 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337437 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.337533 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") pod \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\" (UID: \"f0fb3881-97de-41ce-a664-51e5d4dea3e1\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.344895 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9" (OuterVolumeSpecName: "kube-api-access-pxmt9") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "kube-api-access-pxmt9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.441117 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pxmt9\" (UniqueName: \"kubernetes.io/projected/f0fb3881-97de-41ce-a664-51e5d4dea3e1-kube-api-access-pxmt9\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.518464 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config" (OuterVolumeSpecName: "config") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.525691 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.543782 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.543813 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.572354 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.572851 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="init" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.572867 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="init" Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.572909 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.572915 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.573103 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" containerName="dnsmasq-dns" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.575491 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.582885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerStarted","Data":"461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.615473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.616831 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.626895 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.629942 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerStarted","Data":"bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645743 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.645979 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.646103 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.646117 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.668742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" event={"ID":"f0fb3881-97de-41ce-a664-51e5d4dea3e1","Type":"ContainerDied","Data":"f74f0bb6300abf03a41f5514522429abdf0847f34f1d56df2ed73e73e25973ab"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.668816 4985 scope.go:117] "RemoveContainer" containerID="a6147749e550936512902312ff84cb22311c72f650197306797ae78d53b6737d" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.669125 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5f59b8f679-rtvmd" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.669512 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f0fb3881-97de-41ce-a664-51e5d4dea3e1" (UID: "f0fb3881-97de-41ce-a664-51e5d4dea3e1"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.681855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerStarted","Data":"12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.695589 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-dwwcb" podStartSLOduration=4.69556422 podStartE2EDuration="4.69556422s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:50.628111366 +0000 UTC m=+1421.454674187" watchObservedRunningTime="2026-01-28 18:36:50.69556422 +0000 UTC m=+1421.522127041" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.716710 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerStarted","Data":"d797c3ffe3dba6a95e4e6284ce4ebd9bc07a285808da1bdf5575d32b4671bc8a"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.720607 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-h27v9" podStartSLOduration=4.720581186 podStartE2EDuration="4.720581186s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:50.702866566 +0000 UTC m=+1421.529429387" watchObservedRunningTime="2026-01-28 18:36:50.720581186 +0000 UTC m=+1421.547144037" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.724465 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.732673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerStarted","Data":"a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.749566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.749818 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.750159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.750341 4985 reconciler_common.go:293] "Volume detached for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f0fb3881-97de-41ce-a664-51e5d4dea3e1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.751457 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.752396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.787103 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.797645 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"redhat-operators-mbtp6\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.817928 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerID="0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26" exitCode=0 Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.818086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerDied","Data":"0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.838561 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerStarted","Data":"901d4da2ea774977403413c52d844a7d397bdd9df889717b5e5f413275ab1407"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.852449 4985 scope.go:117] "RemoveContainer" containerID="b25b93afe5c0b9bcdcecf1bc670732171d335e6245638df0593c3602ff20f598" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853726 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853861 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853970 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: 
\"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.853997 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.854104 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.854123 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.858271 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"783d0e39177fc6f57441bbe975e76729d0ab9a44d7fd2176639c567f4c481bbf"} Jan 28 18:36:50 crc kubenswrapper[4985]: E0128 18:36:50.866573 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.879502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerStarted","Data":"4d27cc9d7c9abb101a5028da312f83cf7530369c6dbbf15f3f10f537bfca14e2"} Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.900062 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd" (OuterVolumeSpecName: "kube-api-access-m7vbd") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "kube-api-access-m7vbd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.944924 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.945261 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.948924 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.956053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config" (OuterVolumeSpecName: "config") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.956493 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") pod \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\" (UID: \"edd90323-75fd-4b14-8cba-b1db7a93c2e2\") " Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957772 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957800 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957810 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.957821 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7vbd\" (UniqueName: \"kubernetes.io/projected/edd90323-75fd-4b14-8cba-b1db7a93c2e2-kube-api-access-m7vbd\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:50 crc kubenswrapper[4985]: W0128 18:36:50.958710 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/edd90323-75fd-4b14-8cba-b1db7a93c2e2/volumes/kubernetes.io~configmap/config Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.958726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config" (OuterVolumeSpecName: "config") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:50 crc kubenswrapper[4985]: I0128 18:36:50.982451 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "edd90323-75fd-4b14-8cba-b1db7a93c2e2" (UID: "edd90323-75fd-4b14-8cba-b1db7a93c2e2"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.035078 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.046896 4985 scope.go:117] "RemoveContainer" containerID="0a9323753e3370f5deb9e3fe12803761651ac2f2ff4a5d5c2eb6c176ae9f5e26" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.061559 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.061588 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/edd90323-75fd-4b14-8cba-b1db7a93c2e2-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.125455 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.139954 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5f59b8f679-rtvmd"] Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.316793 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0fb3881-97de-41ce-a664-51e5d4dea3e1" path="/var/lib/kubelet/pods/f0fb3881-97de-41ce-a664-51e5d4dea3e1/volumes" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.769376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.956787 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.958203 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:36:51 crc kubenswrapper[4985]: E0128 18:36:51.958585 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.958597 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.958859 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" containerName="init" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.964611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-bbf5cc879-tgjz6" event={"ID":"edd90323-75fd-4b14-8cba-b1db7a93c2e2","Type":"ContainerDied","Data":"9ad1aa8387f0d8b5f62df594e67c9ee70778bda664a9150cecd6885e74d02194"} Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.964749 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:51 crc kubenswrapper[4985]: I0128 18:36:51.996558 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.007063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerStarted","Data":"a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.018566 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerID="f090f667713f31e333608c60874aca9b174e0dc6eb4e52fb2779980ecf229992" exitCode=0 Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.018648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerDied","Data":"f090f667713f31e333608c60874aca9b174e0dc6eb4e52fb2779980ecf229992"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.056211 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerStarted","Data":"cb6d06c38f976feb1cb400142c94c846180c10a5200e7df25e3c5053c66cb609"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.088170 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerStarted","Data":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.096356 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.100567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.100681 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.100777 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.216744 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " 
pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.217052 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.217123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.217271 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.236818 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-bbf5cc879-tgjz6"] Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.252975 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.386559 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"certified-operators-8fg44\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:52 crc kubenswrapper[4985]: I0128 18:36:52.659383 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.217105 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerStarted","Data":"16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e"} Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.218768 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.244301 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" podStartSLOduration=6.244282296 podStartE2EDuration="6.244282296s" podCreationTimestamp="2026-01-28 18:36:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:53.237741801 +0000 UTC m=+1424.064304622" watchObservedRunningTime="2026-01-28 18:36:53.244282296 +0000 UTC m=+1424.070845117" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.264828 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ebe025a-cece-4723-928f-b6649ea27040" containerID="c90878479aa212272619165fb9e5e236c18feef83564d0b2ea60daad9b1b13ff" exitCode=0 Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.278106 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" containerID="cri-o://2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" gracePeriod=30 Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.278455 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" containerID="cri-o://f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" gracePeriod=30 Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.367707 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd90323-75fd-4b14-8cba-b1db7a93c2e2" path="/var/lib/kubelet/pods/edd90323-75fd-4b14-8cba-b1db7a93c2e2/volumes" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.368639 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"c90878479aa212272619165fb9e5e236c18feef83564d0b2ea60daad9b1b13ff"} Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.433335 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=7.433312363 podStartE2EDuration="7.433312363s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:53.347002856 +0000 UTC m=+1424.173565677" watchObservedRunningTime="2026-01-28 18:36:53.433312363 +0000 UTC m=+1424.259875184" Jan 28 18:36:53 crc kubenswrapper[4985]: I0128 18:36:53.444268 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.271241 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.319830 4985 generic.go:334] "Generic (PLEG): container finished" podID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" exitCode=143 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.319865 4985 generic.go:334] "Generic (PLEG): container finished" podID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" exitCode=143 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.319990 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320625 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerDied","Data":"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320675 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerDied","Data":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320688 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"94d84421-da66-4847-bfcc-f2fc38d072e7","Type":"ContainerDied","Data":"4d27cc9d7c9abb101a5028da312f83cf7530369c6dbbf15f3f10f537bfca14e2"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.320704 4985 scope.go:117] "RemoveContainer" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.326383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerStarted","Data":"d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.326444 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" containerID="cri-o://a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71" gracePeriod=30 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.326548 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" containerID="cri-o://d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f" gracePeriod=30 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.333416 4985 generic.go:334] "Generic (PLEG): container finished" podID="493defdf-169c-4278-b370-69068ec73439" containerID="bb466fa56833f63c962ba1cccca2fbc2223625dc1bb00585f9df84071452e8e0" exitCode=0 Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.333815 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"bb466fa56833f63c962ba1cccca2fbc2223625dc1bb00585f9df84071452e8e0"} Jan 28 18:36:54 crc kubenswrapper[4985]: 
I0128 18:36:54.333859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerStarted","Data":"80ceba888693469af3d53c546cb7c4eba0040a2f5c19424d7894edf743d991ac"} Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.353701 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.353672547 podStartE2EDuration="8.353672547s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:36:54.343588562 +0000 UTC m=+1425.170151383" watchObservedRunningTime="2026-01-28 18:36:54.353672547 +0000 UTC m=+1425.180235368" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.366306 4985 scope.go:117] "RemoveContainer" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428547 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428683 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428739 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.428969 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.429070 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.429115 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") pod \"94d84421-da66-4847-bfcc-f2fc38d072e7\" (UID: \"94d84421-da66-4847-bfcc-f2fc38d072e7\") " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.430611 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.431382 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs" (OuterVolumeSpecName: "logs") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.437392 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts" (OuterVolumeSpecName: "scripts") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.463038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb" (OuterVolumeSpecName: "kube-api-access-z22wb") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "kube-api-access-z22wb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.464573 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (OuterVolumeSpecName: "glance") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "pvc-a28b8b70-fd49-47a9-9731-34913060b77f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.501126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.504072 4985 scope.go:117] "RemoveContainer" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.507068 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": container with ID starting with f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a not found: ID does not exist" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.507131 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a"} err="failed to get container status \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": rpc error: code = NotFound desc = could not find container \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": container with ID starting with f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.507169 4985 scope.go:117] "RemoveContainer" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.508215 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": container with ID starting with 2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c not found: ID does not exist" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.508240 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} err="failed to get container status \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": rpc error: code = NotFound desc = could not find container \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": container with ID starting with 2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.508272 4985 scope.go:117] "RemoveContainer" containerID="f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.509481 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a"} err="failed to get container status \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": rpc error: code = NotFound desc = could not find container \"f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a\": container with ID starting with f6f60d43b4879c13b3dc23514b8f9117acad2a4f87a8fb2ecd97499ce2360e7a not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.509506 4985 scope.go:117] "RemoveContainer" containerID="2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.510995 4985 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c"} err="failed to get container status \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": rpc error: code = NotFound desc = could not find container \"2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c\": container with ID starting with 2ec157e81df9abc3d446015fcd9ecb23e902554cc63bd302989e9233de33ef1c not found: ID does not exist" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.536691 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.536928 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z22wb\" (UniqueName: \"kubernetes.io/projected/94d84421-da66-4847-bfcc-f2fc38d072e7-kube-api-access-z22wb\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537004 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537097 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537228 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" " Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.537350 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/94d84421-da66-4847-bfcc-f2fc38d072e7-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.558994 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data" (OuterVolumeSpecName: "config-data") pod "94d84421-da66-4847-bfcc-f2fc38d072e7" (UID: "94d84421-da66-4847-bfcc-f2fc38d072e7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.580718 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
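The "Skipping UnmountDevice" entry above is the CSI attacher short-circuiting the device-level unmount: kubevirt.io.hostpath-provisioner does not advertise the STAGE_UNSTAGE_VOLUME node capability, so kubelet tears down only the per-pod mount and treats the global unstage as a no-op success (the "UnmountDevice succeeded" line that follows). A minimal Go sketch of that decision, in a hedged form — every type and function name here (nodeCapabilities, unmountDevice) is invented for illustration; the real logic lives in k8s.io/kubernetes pkg/volume/csi:

    // unmount_device_sketch.go — illustrative only; models the decision
    // logged above by csi_attacher.go, not the actual kubelet code.
    package main

    import "fmt"

    // nodeCapabilities stands in for the driver's NodeGetCapabilities reply.
    type nodeCapabilities struct {
        stageUnstageVolume bool // the CSI STAGE_UNSTAGE_VOLUME node capability
    }

    // unmountDevice mirrors the two-level teardown seen in the log: the
    // per-pod unpublish (UnmountVolume.TearDown) runs elsewhere; the global
    // unstage runs only when the driver advertises STAGE_UNSTAGE_VOLUME.
    func unmountDevice(volumeID string, caps nodeCapabilities) error {
        if !caps.stageUnstageVolume {
            // Treated as success, exactly like the journal entry above.
            fmt.Printf("skipping UnmountDevice for %s: STAGE_UNSTAGE_VOLUME not set\n", volumeID)
            return nil
        }
        // With the capability set, the attacher would call the driver's
        // NodeUnstageVolume here to tear down the global mount point.
        fmt.Printf("NodeUnstageVolume(%s)\n", volumeID)
        return nil
    }

    func main() {
        // kubevirt.io.hostpath-provisioner evidently does not stage/unstage:
        _ = unmountDevice("pvc-a28b8b70-fd49-47a9-9731-34913060b77f",
            nodeCapabilities{stageUnstageVolume: false})
    }

This is why the PVC can be detached and re-attached so quickly during the glance-default-external-api-0 rebuild below: there is no NodeStage/NodeUnstage round trip, only bind mounts into and out of the pod directory.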
Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.580868 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f") on node "crc" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.642969 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.643003 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94d84421-da66-4847-bfcc-f2fc38d072e7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.759396 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.772585 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.805866 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.807701 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.807751 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" Jan 28 18:36:54 crc kubenswrapper[4985]: E0128 18:36:54.807845 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.807856 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.808438 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-httpd" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.808473 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" containerName="glance-log" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.826844 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.844217 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.851613 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.884629 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.884912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.884995 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885024 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885071 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885102 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.885372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.968735 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988330 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988400 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988488 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988509 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988545 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.988608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 
18:36:54.996522 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.998924 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:54 crc kubenswrapper[4985]: I0128 18:36:54.999405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.000158 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.003230 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.008074 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.011972 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
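The mount path mirrors the unmount path: for this driver MountVolume.MountDevice resolves to a no-op NodeStage (the "Skipping MountDevice" line above), the globalmount device path is recorded anyway, and the per-pod MountVolume.SetUp calls proceed as usual. When following one PVC through a dump like this, a few fixed marker strings are enough; the helper below is a sketch under that assumption (the file name volume_trace_sketch.go and the hard-coded volume ID are illustrative, and the markers are taken verbatim from the entries in this log):

    // volume_trace_sketch.go — scans kubelet journal text on stdin and
    // prints the mount/unmount milestones for one volume, so the two-phase
    // flow (device-level MountDevice, then per-pod SetUp) is easy to follow.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        needle := "pvc-a28b8b70-fd49-47a9-9731-34913060b77f" // volume of interest
        markers := []string{
            "MountVolume.MountDevice succeeded",
            "MountVolume.SetUp succeeded",
            "UnmountVolume.TearDown succeeded",
            "UnmountDevice succeeded",
            "Volume detached",
        }
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines are long
        for sc.Scan() {
            line := sc.Text()
            if !strings.Contains(line, needle) {
                continue
            }
            for _, m := range markers {
                if strings.Contains(line, m) {
                    fmt.Println(m)
                }
            }
        }
    }

Fed the kubelet journal on stdin (for example, journalctl -u kubelet --no-pager | go run volume_trace_sketch.go), it prints the milestones in order, which makes the teardown-then-remount of this PVC across the pod replacement easy to confirm.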
Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.012015 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d6568af50c46d048a9023d9ac84db4baa0cf8b023fb9ef6c59e622b024bcc77/globalmount\"" pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.021960 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.067414 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.199729 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.311749 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94d84421-da66-4847-bfcc-f2fc38d072e7" path="/var/lib/kubelet/pods/94d84421-da66-4847-bfcc-f2fc38d072e7/volumes" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376893 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerID="d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f" exitCode=0 Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376929 4985 generic.go:334] "Generic (PLEG): container finished" podID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerID="a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71" exitCode=143 Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerDied","Data":"d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f"} Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.376999 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerDied","Data":"a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71"} Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.395767 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerStarted","Data":"ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984"} Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.636791 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704172 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704270 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704486 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.704764 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") pod \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\" (UID: \"ff279d8d-4c4e-4bdc-a880-7a739d15999c\") " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.705627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs" (OuterVolumeSpecName: "logs") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.705867 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.736657 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k" (OuterVolumeSpecName: "kube-api-access-d4q8k") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "kube-api-access-d4q8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.760440 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts" (OuterVolumeSpecName: "scripts") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.771154 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (OuterVolumeSpecName: "glance") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "pvc-515c3b80-2464-4146-928c-cf9de6a379dc". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.800058 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808134 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808184 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" " Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808198 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ff279d8d-4c4e-4bdc-a880-7a739d15999c-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808206 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808217 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4q8k\" (UniqueName: \"kubernetes.io/projected/ff279d8d-4c4e-4bdc-a880-7a739d15999c-kube-api-access-d4q8k\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.808225 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.831959 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data" (OuterVolumeSpecName: "config-data") pod "ff279d8d-4c4e-4bdc-a880-7a739d15999c" (UID: "ff279d8d-4c4e-4bdc-a880-7a739d15999c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.838274 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.838414 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc") on node "crc" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.911679 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.911716 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff279d8d-4c4e-4bdc-a880-7a739d15999c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:36:55 crc kubenswrapper[4985]: I0128 18:36:55.924668 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.407275 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"ff279d8d-4c4e-4bdc-a880-7a739d15999c","Type":"ContainerDied","Data":"901d4da2ea774977403413c52d844a7d397bdd9df889717b5e5f413275ab1407"} Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.408380 4985 scope.go:117] "RemoveContainer" containerID="d9a7fbe77569a9cccca192f6b208ed4293873e8e329ca9372d198c908395de7f" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.408442 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.416919 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerStarted","Data":"43d735c182cbb81ec5017199eb78a2029759022896fdabfe1470a42d01bd6b7b"} Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.421000 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ebe025a-cece-4723-928f-b6649ea27040" containerID="ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984" exitCode=0 Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.421130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984"} Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.510394 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.525883 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.536527 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: E0128 18:36:56.537232 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537312 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" Jan 28 18:36:56 crc kubenswrapper[4985]: E0128 18:36:56.537417 4985 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537473 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537775 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-log" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.537864 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" containerName="glance-httpd" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.539167 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.543332 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.543388 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.548689 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629834 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629856 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629935 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629953 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.629981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.630007 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.687769 4985 scope.go:117] "RemoveContainer" containerID="a3a6974dd2a2d5d592eec4b16a00f394ceced6b18c1c368fe6111cc253be6e71" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732291 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732333 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732371 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732394 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732428 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.732462 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.733581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.734213 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.739861 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.739895 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d04256428a5045d3b55ec61489edb632decdf9f4666f3e6952b725d307784bb2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.740008 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.740512 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.740782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.741822 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.756438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.805892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:36:56 crc kubenswrapper[4985]: I0128 18:36:56.868956 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.289978 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff279d8d-4c4e-4bdc-a880-7a739d15999c" path="/var/lib/kubelet/pods/ff279d8d-4c4e-4bdc-a880-7a739d15999c/volumes" Jan 28 18:36:57 crc kubenswrapper[4985]: W0128 18:36:57.453012 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod183853eb_591f_4859_9824_550b76c6f115.slice/crio-3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f WatchSource:0}: Error finding container 3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f: Status 404 returned error can't find the container with id 3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.454850 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.628492 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.750810 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:36:57 crc kubenswrapper[4985]: I0128 18:36:57.751269 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" containerID="cri-o://7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659" gracePeriod=10 Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.461420 4985 generic.go:334] "Generic (PLEG): container finished" podID="3d356801-0ed0-4343-87a9-29d23453d621" containerID="783d0e39177fc6f57441bbe975e76729d0ab9a44d7fd2176639c567f4c481bbf" exitCode=0 Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.461571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerDied","Data":"783d0e39177fc6f57441bbe975e76729d0ab9a44d7fd2176639c567f4c481bbf"} Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.469209 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa80be1e-734c-44bc-a957-137332ecd58a" containerID="7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659" exitCode=0 Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.470155 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerDied","Data":"7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659"} Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.473654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerStarted","Data":"3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f"} Jan 28 18:36:58 crc kubenswrapper[4985]: I0128 18:36:58.477889 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerStarted","Data":"c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f"} Jan 28 18:37:00 crc kubenswrapper[4985]: I0128 
18:37:00.537038 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: connect: connection refused" Jan 28 18:37:01 crc kubenswrapper[4985]: E0128 18:37:01.196464 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:37:02 crc kubenswrapper[4985]: I0128 18:37:02.532453 4985 generic.go:334] "Generic (PLEG): container finished" podID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerID="12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337" exitCode=0 Jan 28 18:37:02 crc kubenswrapper[4985]: I0128 18:37:02.532537 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerDied","Data":"12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337"} Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.123378 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185072 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185157 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185216 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185361 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185495 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.185617 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") pod \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\" (UID: \"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e\") " 
Jan 28 18:37:05 crc kubenswrapper[4985]: E0128 18:37:05.187918 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.194316 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts" (OuterVolumeSpecName: "scripts") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.196445 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8" (OuterVolumeSpecName: "kube-api-access-qzfj8") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "kube-api-access-qzfj8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.203926 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.212771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.224958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.237377 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data" (OuterVolumeSpecName: "config-data") pod "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" (UID: "32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289023 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289055 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289118 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289130 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfj8\" (UniqueName: \"kubernetes.io/projected/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-kube-api-access-qzfj8\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289143 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.289152 4985 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.574407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-h27v9" event={"ID":"32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e","Type":"ContainerDied","Data":"f594c9e7d10fa6181857cdca65cc9afd3cc6e7a2e73bb7a606297e4b8c0e60db"} Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.574443 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f594c9e7d10fa6181857cdca65cc9afd3cc6e7a2e73bb7a606297e4b8c0e60db" Jan 28 18:37:05 crc kubenswrapper[4985]: I0128 18:37:05.574484 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-h27v9" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.221067 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.230815 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-h27v9"] Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.323547 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:37:06 crc kubenswrapper[4985]: E0128 18:37:06.324122 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerName="keystone-bootstrap" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.324142 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerName="keystone-bootstrap" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.324527 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" containerName="keystone-bootstrap" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.325478 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.327614 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.327838 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.328124 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.328317 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.330328 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.339003 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417175 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417325 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417410 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.417733 
4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.418009 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.418308 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.519864 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.519927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520001 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520069 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.520098 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.534116 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.534372 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.539287 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.539873 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.540378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.540668 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"keystone-bootstrap-hlgnm\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:06 crc kubenswrapper[4985]: I0128 18:37:06.641401 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:07 crc kubenswrapper[4985]: I0128 18:37:07.300745 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e" path="/var/lib/kubelet/pods/32cfbc0d-6e0b-47b5-af3f-c6501af3dd3e/volumes" Jan 28 18:37:10 crc kubenswrapper[4985]: I0128 18:37:10.537344 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Jan 28 18:37:11 crc kubenswrapper[4985]: E0128 18:37:11.524626 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba791a5a_08bb_4a97_a4e4_9b0e06bac324.slice/crio-conmon-236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:37:15 crc kubenswrapper[4985]: I0128 18:37:15.538134 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Jan 28 18:37:15 crc kubenswrapper[4985]: I0128 18:37:15.539020 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:37:20 crc kubenswrapper[4985]: I0128 18:37:20.539459 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.147:5353: i/o timeout" Jan 28 18:37:22 crc kubenswrapper[4985]: E0128 18:37:22.471889 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad" Jan 28 18:37:22 crc kubenswrapper[4985]: E0128 18:37:22.472516 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad,Command:[/bin/opm],Args:[serve /extracted-catalog/catalog --cache-dir=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOMEMLIMIT,Value:30MiB,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{31457280 0} {} 30Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qll99,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-mbtp6_openshift-marketplace(1ebe025a-cece-4723-928f-b6649ea27040): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:37:22 crc kubenswrapper[4985]: E0128 18:37:22.474132 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.601055 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.736499 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.737781 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.737947 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.738554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.738828 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") pod \"fa80be1e-734c-44bc-a957-137332ecd58a\" (UID: \"fa80be1e-734c-44bc-a957-137332ecd58a\") " Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.744056 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb" (OuterVolumeSpecName: "kube-api-access-xdwqb") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "kube-api-access-xdwqb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.795366 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" event={"ID":"fa80be1e-734c-44bc-a957-137332ecd58a","Type":"ContainerDied","Data":"d7aa5495d851ceb3cfab59b851d20f52e6f54fcefbf4bc770429b29199850e87"} Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.795459 4985 scope.go:117] "RemoveContainer" containerID="7bf8dbd2dcbc5b0a1855cc79c5970c28806a8595e366298bec9e80900e68f659" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.795656 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.800413 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config" (OuterVolumeSpecName: "config") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.803872 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.808194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.816478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "fa80be1e-734c-44bc-a957-137332ecd58a" (UID: "fa80be1e-734c-44bc-a957-137332ecd58a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843237 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843278 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843310 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843321 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdwqb\" (UniqueName: \"kubernetes.io/projected/fa80be1e-734c-44bc-a957-137332ecd58a-kube-api-access-xdwqb\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:22 crc kubenswrapper[4985]: I0128 18:37:22.843331 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/fa80be1e-734c-44bc-a957-137332ecd58a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:23 crc kubenswrapper[4985]: I0128 18:37:23.140953 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:37:23 crc kubenswrapper[4985]: I0128 18:37:23.152146 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-f4mq4"] Jan 28 18:37:23 crc kubenswrapper[4985]: I0128 18:37:23.277522 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" path="/var/lib/kubelet/pods/fa80be1e-734c-44bc-a957-137332ecd58a/volumes" Jan 28 18:37:25 crc kubenswrapper[4985]: I0128 18:37:25.540229 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-b8fbc5445-f4mq4" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 
10.217.0.147:5353: i/o timeout" Jan 28 18:37:26 crc kubenswrapper[4985]: E0128 18:37:26.596665 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"\"" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.479463 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.486243 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-szgd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-s8hs9_openstack(feecd29d-1d64-47f4-a1af-e634b7d87f3a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.491619 4985 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-s8hs9" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" Jan 28 18:37:28 crc kubenswrapper[4985]: E0128 18:37:28.892471 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-s8hs9" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.014867 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.015081 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nd8h59fh589h4h588h656h68ch87h586h58dhc7hb8h5f6h9dhdh9h585h67fh56ch5ch57dhcch5c7hd7h579hddh58ch77h5dh77h57fh57q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4s629,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(2d1d02ed-9b38-404a-8926-9d4aaf7bab57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" 
logger="UnhandledError" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.361670 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.362020 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8n5mf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-qjrfx_openstack(dda9fdbc-ce81-4e63-b32f-733379d893d4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.363204 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-qjrfx" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.394381 4985 scope.go:117] "RemoveContainer" containerID="b07a966b1eedec1e93ccdffea190010036fa22a709598fabaaf5909bac14f589" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.915584 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerStarted","Data":"38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c"} Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.925641 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerStarted","Data":"0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd"} Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.931513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"d672a1cd2835bd532c59c1d89f245b7417d6804249dc7c63ead12ec5e0ccb77d"} Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.947324 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-8h4kr" podStartSLOduration=3.92623114 podStartE2EDuration="44.947300048s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:49.283165154 +0000 UTC m=+1420.109727975" lastFinishedPulling="2026-01-28 18:37:30.304234062 +0000 UTC m=+1461.130796883" observedRunningTime="2026-01-28 18:37:30.929752352 +0000 UTC m=+1461.756315173" watchObservedRunningTime="2026-01-28 18:37:30.947300048 +0000 UTC m=+1461.773862869" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.952038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerStarted","Data":"badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514"} Jan 28 18:37:30 crc kubenswrapper[4985]: E0128 18:37:30.952928 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified\\\"\"" pod="openstack/heat-db-sync-qjrfx" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.978659 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:37:30 crc kubenswrapper[4985]: W0128 18:37:30.980438 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a3199c2_6b1c_4a07_849d_cc92d372c5c3.slice/crio-77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea WatchSource:0}: Error finding container 77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea: Status 404 returned error can't find the container with id 77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.984580 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9w9wm" podStartSLOduration=3.965918191 podStartE2EDuration="44.98455715s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:49.330756058 +0000 UTC m=+1420.157318879" lastFinishedPulling="2026-01-28 18:37:30.349395017 +0000 UTC m=+1461.175957838" observedRunningTime="2026-01-28 18:37:30.969893106 +0000 UTC m=+1461.796455927" watchObservedRunningTime="2026-01-28 18:37:30.98455715 +0000 UTC m=+1461.811119971" Jan 28 18:37:30 crc kubenswrapper[4985]: I0128 18:37:30.993124 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.565658 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerStarted","Data":"c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.568995 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerStarted","Data":"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.572572 4985 generic.go:334] "Generic (PLEG): container finished" podID="493defdf-169c-4278-b370-69068ec73439" containerID="0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd" exitCode=0 Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.572603 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.574361 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerStarted","Data":"77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea"} Jan 28 18:37:32 crc kubenswrapper[4985]: I0128 18:37:32.596110 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=38.596087026 podStartE2EDuration="38.596087026s" podCreationTimestamp="2026-01-28 18:36:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:32.594200703 +0000 UTC m=+1463.420763534" watchObservedRunningTime="2026-01-28 18:37:32.596087026 +0000 UTC m=+1463.422649847" Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.605425 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerStarted","Data":"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"} Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.607520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerStarted","Data":"bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0"} Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.610737 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"1bb403b36214d9dd666e2b32bc6b48e4b0145e97098046a0b40fa4f9fdd5bb47"} Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.638190 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=38.638160959 podStartE2EDuration="38.638160959s" podCreationTimestamp="2026-01-28 18:36:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:34.627296282 +0000 UTC m=+1465.453859103" watchObservedRunningTime="2026-01-28 18:37:34.638160959 +0000 UTC m=+1465.464723790" Jan 28 18:37:34 crc kubenswrapper[4985]: I0128 18:37:34.665189 4985 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openstack/keystone-bootstrap-hlgnm" podStartSLOduration=28.665164382 podStartE2EDuration="28.665164382s" podCreationTimestamp="2026-01-28 18:37:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:34.662211708 +0000 UTC m=+1465.488774529" watchObservedRunningTime="2026-01-28 18:37:34.665164382 +0000 UTC m=+1465.491727223" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.200583 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.200641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.294422 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.294523 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.624704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:37:35 crc kubenswrapper[4985]: I0128 18:37:35.624780 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.636519 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerStarted","Data":"63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249"} Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.637970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e"} Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.641969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"3d356801-0ed0-4343-87a9-29d23453d621","Type":"ContainerStarted","Data":"0a11aa37babe5740860c5b2dd431728b72db2aeef53e5c3e5c4896ed88505ab1"} Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.659604 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8fg44" podStartSLOduration=4.166458743 podStartE2EDuration="45.659585648s" podCreationTimestamp="2026-01-28 18:36:51 +0000 UTC" firstStartedPulling="2026-01-28 18:36:54.36655398 +0000 UTC m=+1425.193116801" lastFinishedPulling="2026-01-28 18:37:35.859680885 +0000 UTC m=+1466.686243706" observedRunningTime="2026-01-28 18:37:36.655190864 +0000 UTC m=+1467.481753705" watchObservedRunningTime="2026-01-28 18:37:36.659585648 +0000 UTC m=+1467.486148469" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.693901 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=51.693876876 podStartE2EDuration="51.693876876s" podCreationTimestamp="2026-01-28 18:36:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 18:37:36.690488591 +0000 UTC m=+1467.517051432" watchObservedRunningTime="2026-01-28 18:37:36.693876876 +0000 UTC m=+1467.520439717" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.869981 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.870025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.910498 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:36 crc kubenswrapper[4985]: I0128 18:37:36.929174 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:37 crc kubenswrapper[4985]: I0128 18:37:37.655578 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:37 crc kubenswrapper[4985]: I0128 18:37:37.655824 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:39 crc kubenswrapper[4985]: I0128 18:37:39.678602 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:37:40 crc kubenswrapper[4985]: I0128 18:37:40.494101 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.661424 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.661916 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.722759 4985 generic.go:334] "Generic (PLEG): container finished" podID="f788adab-3912-43da-869e-2450d65b761f" containerID="38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c" exitCode=0 Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.722819 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerDied","Data":"38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c"} Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.729989 4985 generic.go:334] "Generic (PLEG): container finished" podID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerID="bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0" exitCode=0 Jan 28 18:37:42 crc kubenswrapper[4985]: I0128 18:37:42.730026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerDied","Data":"bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0"} Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.249791 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.249887 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.260839 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/glance-default-internal-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.287468 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.287673 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.288294 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:37:43 crc kubenswrapper[4985]: I0128 18:37:43.730711 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" probeResult="failure" output=< Jan 28 18:37:43 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:37:43 crc kubenswrapper[4985]: > Jan 28 18:37:44 crc kubenswrapper[4985]: I0128 18:37:44.752294 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerStarted","Data":"fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077"} Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.493902 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.500864 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.769789 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Jan 28 18:37:45 crc kubenswrapper[4985]: I0128 18:37:45.791588 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-mbtp6" podStartSLOduration=5.351058306 podStartE2EDuration="55.791563516s" podCreationTimestamp="2026-01-28 18:36:50 +0000 UTC" firstStartedPulling="2026-01-28 18:36:53.267315486 +0000 UTC m=+1424.093878307" lastFinishedPulling="2026-01-28 18:37:43.707820696 +0000 UTC m=+1474.534383517" observedRunningTime="2026-01-28 18:37:45.78923752 +0000 UTC m=+1476.615800341" watchObservedRunningTime="2026-01-28 18:37:45.791563516 +0000 UTC m=+1476.618126347" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.070498 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8h4kr" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.082081 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216293 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216632 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216698 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216739 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216772 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.216800 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217174 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217244 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217291 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217383 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") pod \"f788adab-3912-43da-869e-2450d65b761f\" (UID: \"f788adab-3912-43da-869e-2450d65b761f\") " Jan 28 
18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217427 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") pod \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\" (UID: \"4a3199c2-6b1c-4a07-849d-cc92d372c5c3\") " Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.217978 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs" (OuterVolumeSpecName: "logs") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.232768 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.232810 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts" (OuterVolumeSpecName: "scripts") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.235243 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.236226 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts" (OuterVolumeSpecName: "scripts") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.236546 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8" (OuterVolumeSpecName: "kube-api-access-wmsb8") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "kube-api-access-wmsb8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.247647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d" (OuterVolumeSpecName: "kube-api-access-k5n2d") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "kube-api-access-k5n2d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319880 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f788adab-3912-43da-869e-2450d65b761f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319912 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319926 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k5n2d\" (UniqueName: \"kubernetes.io/projected/f788adab-3912-43da-869e-2450d65b761f-kube-api-access-k5n2d\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319942 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmsb8\" (UniqueName: \"kubernetes.io/projected/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-kube-api-access-wmsb8\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319953 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319963 4985 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.319975 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.320173 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.354103 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data" (OuterVolumeSpecName: "config-data") pod "4a3199c2-6b1c-4a07-849d-cc92d372c5c3" (UID: "4a3199c2-6b1c-4a07-849d-cc92d372c5c3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.366405 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data" (OuterVolumeSpecName: "config-data") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.374464 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f788adab-3912-43da-869e-2450d65b761f" (UID: "f788adab-3912-43da-869e-2450d65b761f"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.422046 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.424597 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4a3199c2-6b1c-4a07-849d-cc92d372c5c3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.424627 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.424637 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f788adab-3912-43da-869e-2450d65b761f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.776329 4985 generic.go:334] "Generic (PLEG): container finished" podID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerID="badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514" exitCode=0 Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.776402 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerDied","Data":"badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514"} Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.779194 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-8h4kr" event={"ID":"f788adab-3912-43da-869e-2450d65b761f","Type":"ContainerDied","Data":"a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c"} Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.779241 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3c254f828427ba506d4802902a1b02512f0a07f8294c8db3817864021b8fd0c" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.779202 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-8h4kr" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.781158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerStarted","Data":"d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce"} Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.783656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-hlgnm" event={"ID":"4a3199c2-6b1c-4a07-849d-cc92d372c5c3","Type":"ContainerDied","Data":"77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea"} Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.783688 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77878eeec63482b3f2187ac8aabd1b1217827902e6e3f40bc3b8ec22d896f2ea" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.783737 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-hlgnm" Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.790070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4"} Jan 28 18:37:46 crc kubenswrapper[4985]: I0128 18:37:46.840551 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-qjrfx" podStartSLOduration=3.285582642 podStartE2EDuration="1m0.840531229s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:48.383499235 +0000 UTC m=+1419.210062056" lastFinishedPulling="2026-01-28 18:37:45.938447822 +0000 UTC m=+1476.765010643" observedRunningTime="2026-01-28 18:37:46.837233516 +0000 UTC m=+1477.663796347" watchObservedRunningTime="2026-01-28 18:37:46.840531229 +0000 UTC m=+1477.667094050" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311073 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-848676699d-9lbcr"] Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311559 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="init" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311575 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="init" Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311588 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f788adab-3912-43da-869e-2450d65b761f" containerName="placement-db-sync" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311593 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f788adab-3912-43da-869e-2450d65b761f" containerName="placement-db-sync" Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311605 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerName="keystone-bootstrap" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311611 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerName="keystone-bootstrap" Jan 28 18:37:47 crc kubenswrapper[4985]: E0128 18:37:47.311639 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311644 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311826 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa80be1e-734c-44bc-a957-137332ecd58a" containerName="dnsmasq-dns" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311841 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" containerName="keystone-bootstrap" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.311862 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f788adab-3912-43da-869e-2450d65b761f" containerName="placement-db-sync" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.313017 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.319680 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.320423 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-fpld6" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.320629 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.320806 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.321043 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.337774 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-77c7879f98-bcrvp"] Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.339286 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.346747 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.346970 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347122 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347769 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347857 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-g7p4d" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.347979 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.355090 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-848676699d-9lbcr"] Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.367740 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c7879f98-bcrvp"] Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473769 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-public-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473827 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-config-data\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-internal-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473898 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-public-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473912 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-internal-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-logs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473980 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-combined-ca-bundle\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.473995 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-credential-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474021 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-scripts\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474038 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-combined-ca-bundle\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474071 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-scripts\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474110 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5nj2\" (UniqueName: \"kubernetes.io/projected/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-kube-api-access-m5nj2\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474168 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9gw\" (UniqueName: \"kubernetes.io/projected/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-kube-api-access-6w9gw\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474207 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-fernet-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.474239 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-config-data\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.576616 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w9gw\" (UniqueName: \"kubernetes.io/projected/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-kube-api-access-6w9gw\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577033 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-fernet-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577154 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-config-data\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577314 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-public-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-config-data\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577566 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-internal-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-public-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577806 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-internal-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.577962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-logs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-combined-ca-bundle\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578189 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-credential-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578337 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-scripts\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578444 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-combined-ca-bundle\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-scripts\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.578686 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m5nj2\" (UniqueName: 
\"kubernetes.io/projected/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-kube-api-access-m5nj2\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.580395 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-logs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.584328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-credential-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.584392 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-public-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.584873 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-scripts\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585038 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-combined-ca-bundle\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-config-data\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585175 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-config-data\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.585597 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-public-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.589222 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-scripts\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc 
kubenswrapper[4985]: I0128 18:37:47.589601 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-combined-ca-bundle\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.591915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-internal-tls-certs\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.593104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-fernet-keys\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.594337 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-internal-tls-certs\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.597217 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m5nj2\" (UniqueName: \"kubernetes.io/projected/cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1-kube-api-access-m5nj2\") pod \"placement-848676699d-9lbcr\" (UID: \"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1\") " pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.603052 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w9gw\" (UniqueName: \"kubernetes.io/projected/d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b-kube-api-access-6w9gw\") pod \"keystone-77c7879f98-bcrvp\" (UID: \"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b\") " pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.658320 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.671760 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:47 crc kubenswrapper[4985]: I0128 18:37:47.824674 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerStarted","Data":"ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06"} Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.463513 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.488624 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-s8hs9" podStartSLOduration=5.403375852 podStartE2EDuration="1m2.488603968s" podCreationTimestamp="2026-01-28 18:36:46 +0000 UTC" firstStartedPulling="2026-01-28 18:36:48.851331313 +0000 UTC m=+1419.677894134" lastFinishedPulling="2026-01-28 18:37:45.936559429 +0000 UTC m=+1476.763122250" observedRunningTime="2026-01-28 18:37:47.856236945 +0000 UTC m=+1478.682799766" watchObservedRunningTime="2026-01-28 18:37:48.488603968 +0000 UTC m=+1479.315166789" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.570234 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-77c7879f98-bcrvp"] Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.586591 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-848676699d-9lbcr"] Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.608581 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") pod \"2ba5eedf-14b8-45ce-b738-e41a6daff299\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.609619 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") pod \"2ba5eedf-14b8-45ce-b738-e41a6daff299\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.609838 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") pod \"2ba5eedf-14b8-45ce-b738-e41a6daff299\" (UID: \"2ba5eedf-14b8-45ce-b738-e41a6daff299\") " Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.614908 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "2ba5eedf-14b8-45ce-b738-e41a6daff299" (UID: "2ba5eedf-14b8-45ce-b738-e41a6daff299"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.618875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh" (OuterVolumeSpecName: "kube-api-access-9lcxh") pod "2ba5eedf-14b8-45ce-b738-e41a6daff299" (UID: "2ba5eedf-14b8-45ce-b738-e41a6daff299"). InnerVolumeSpecName "kube-api-access-9lcxh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.650394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ba5eedf-14b8-45ce-b738-e41a6daff299" (UID: "2ba5eedf-14b8-45ce-b738-e41a6daff299"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.713357 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.713406 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lcxh\" (UniqueName: \"kubernetes.io/projected/2ba5eedf-14b8-45ce-b738-e41a6daff299-kube-api-access-9lcxh\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.713419 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ba5eedf-14b8-45ce-b738-e41a6daff299-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.892438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c7879f98-bcrvp" event={"ID":"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b","Type":"ContainerStarted","Data":"6f4553e8c8e44fd69834b780e370098e87fb1e04fc10ff7cc16b7301aa8daf3a"} Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.916521 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9w9wm" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.916605 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9w9wm" event={"ID":"2ba5eedf-14b8-45ce-b738-e41a6daff299","Type":"ContainerDied","Data":"d797c3ffe3dba6a95e4e6284ce4ebd9bc07a285808da1bdf5575d32b4671bc8a"} Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.916646 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d797c3ffe3dba6a95e4e6284ce4ebd9bc07a285808da1bdf5575d32b4671bc8a" Jan 28 18:37:48 crc kubenswrapper[4985]: I0128 18:37:48.918305 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-848676699d-9lbcr" event={"ID":"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1","Type":"ContainerStarted","Data":"ac23c57e002cb7459b93a282e6b14ac22cc7d6f52a2f2c5a143106c014002033"} Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.063026 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-6cc6bcfccd-rh55k"] Jan 28 18:37:49 crc kubenswrapper[4985]: E0128 18:37:49.063650 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerName="barbican-db-sync" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.063665 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerName="barbican-db-sync" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.063913 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" containerName="barbican-db-sync" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.089916 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.096720 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.097345 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.097757 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-fl96f" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.235658 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data-custom\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.235919 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-combined-ca-bundle\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.236014 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-logs\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.236361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5z2vf\" (UniqueName: \"kubernetes.io/projected/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-kube-api-access-5z2vf\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.236476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.251922 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-6c84c9469f-9xntt"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.257847 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.260356 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338862 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data-custom\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338942 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data-custom\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338965 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-combined-ca-bundle\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.338986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-logs\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339010 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-combined-ca-bundle\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhhkk\" (UniqueName: \"kubernetes.io/projected/d885ddad-ecc9-4b73-ad9e-9da819f95107-kube-api-access-xhhkk\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5z2vf\" (UniqueName: \"kubernetes.io/projected/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-kube-api-access-5z2vf\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " 
pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.339175 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d885ddad-ecc9-4b73-ad9e-9da819f95107-logs\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.346606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-logs\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.347156 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-combined-ca-bundle\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.362844 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cc6bcfccd-rh55k"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.363118 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-6c84c9469f-9xntt"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.363131 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.376227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data-custom\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.382073 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.382735 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5z2vf\" (UniqueName: \"kubernetes.io/projected/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-kube-api-access-5z2vf\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.393400 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4b18150-cbd6-4c6f-a28b-8c66b1e875f2-config-data\") pod \"barbican-keystone-listener-6cc6bcfccd-rh55k\" (UID: \"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2\") " pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.416118 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441449 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data-custom\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441582 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-combined-ca-bundle\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.441662 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhhkk\" (UniqueName: \"kubernetes.io/projected/d885ddad-ecc9-4b73-ad9e-9da819f95107-kube-api-access-xhhkk\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.450749 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.453406 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.458791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d885ddad-ecc9-4b73-ad9e-9da819f95107-logs\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.459284 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d885ddad-ecc9-4b73-ad9e-9da819f95107-logs\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.459612 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.468020 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.469019 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.513672 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-combined-ca-bundle\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.542773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhhkk\" (UniqueName: \"kubernetes.io/projected/d885ddad-ecc9-4b73-ad9e-9da819f95107-kube-api-access-xhhkk\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.543297 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data-custom\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.547020 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d885ddad-ecc9-4b73-ad9e-9da819f95107-config-data\") pod \"barbican-worker-6c84c9469f-9xntt\" (UID: \"d885ddad-ecc9-4b73-ad9e-9da819f95107\") " pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560911 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560944 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.560972 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561022 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561067 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561085 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561105 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561127 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.561229 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.593734 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-6c84c9469f-9xntt" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664929 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.664995 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665070 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665090 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: 
\"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665155 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665273 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.665325 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.666415 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.666517 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.667103 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.667785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.668710 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.672627 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 
18:37:49.673496 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.673748 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.674428 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.692771 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"barbican-api-59699bb574-kg5jx\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.700955 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"dnsmasq-dns-7c67bffd47-2whmk\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.757948 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:49 crc kubenswrapper[4985]: I0128 18:37:49.780377 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.009356 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-77c7879f98-bcrvp" event={"ID":"d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b","Type":"ContainerStarted","Data":"69a1467b553a6c6558576781ca2b4d8370bd6677cad738b1106e12f17507729c"} Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.009789 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.046875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-848676699d-9lbcr" event={"ID":"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1","Type":"ContainerStarted","Data":"542eb0db0cbf56f068474f29f6fc77fe5b6a9c54b8c0b18c390c937adb6c8897"} Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.046921 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-848676699d-9lbcr" event={"ID":"cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1","Type":"ContainerStarted","Data":"7a083ab4004f72bbdd409db978d2a2bb717e0d1cc28527fe9e0320b124be70ad"} Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.049386 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.055374 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.159286 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-77c7879f98-bcrvp" podStartSLOduration=3.159235554 podStartE2EDuration="3.159235554s" podCreationTimestamp="2026-01-28 18:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:50.076838637 +0000 UTC m=+1480.903401468" watchObservedRunningTime="2026-01-28 18:37:50.159235554 +0000 UTC m=+1480.985798375" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.238516 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-848676699d-9lbcr" podStartSLOduration=3.238493441 podStartE2EDuration="3.238493441s" podCreationTimestamp="2026-01-28 18:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:50.198916594 +0000 UTC m=+1481.025479435" watchObservedRunningTime="2026-01-28 18:37:50.238493441 +0000 UTC m=+1481.065056262" Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.299461 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-6cc6bcfccd-rh55k"] Jan 28 18:37:50 crc kubenswrapper[4985]: W0128 18:37:50.375474 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf4b18150_cbd6_4c6f_a28b_8c66b1e875f2.slice/crio-0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533 WatchSource:0}: Error finding container 0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533: Status 404 returned error can't find the container with id 0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533 Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.634918 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/barbican-worker-6c84c9469f-9xntt"] Jan 28 18:37:50 crc kubenswrapper[4985]: W0128 18:37:50.643413 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd885ddad_ecc9_4b73_ad9e_9da819f95107.slice/crio-898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04 WatchSource:0}: Error finding container 898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04: Status 404 returned error can't find the container with id 898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04 Jan 28 18:37:50 crc kubenswrapper[4985]: I0128 18:37:50.916893 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.030358 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.035335 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.035394 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.059500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" event={"ID":"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2","Type":"ContainerStarted","Data":"0deb2fa615be711bba18d5f5e24ddaf749483a93a2d2bce21ee2afa867b80533"} Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.061642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c84c9469f-9xntt" event={"ID":"d885ddad-ecc9-4b73-ad9e-9da819f95107","Type":"ContainerStarted","Data":"898a9604b4b483a6d2263993a2bdd40850eac63f8b7f263682de22c5e6527f04"} Jan 28 18:37:51 crc kubenswrapper[4985]: I0128 18:37:51.063801 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerStarted","Data":"dd0880e0b96ac3a23f885b549586af18ca3a6b0027c6f034c1105c8d228a817a"} Jan 28 18:37:51 crc kubenswrapper[4985]: W0128 18:37:51.088498 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod523590c1_de57_4248_aa7f_2c52024d649e.slice/crio-b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5 WatchSource:0}: Error finding container b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5: Status 404 returned error can't find the container with id b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5 Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.077262 4985 generic.go:334] "Generic (PLEG): container finished" podID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerID="e23d36aeeab5ee663f101fb703501f68e124bafdaaddaec3cfc6864e9e9081f8" exitCode=0 Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.077458 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerDied","Data":"e23d36aeeab5ee663f101fb703501f68e124bafdaaddaec3cfc6864e9e9081f8"} Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.082627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" 
event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerStarted","Data":"12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6"} Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.082680 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerStarted","Data":"b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5"} Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.131173 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:37:52 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:37:52 crc kubenswrapper[4985]: > Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.621169 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-668ffb7f9d-shvfm"] Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.623441 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.626849 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.632510 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659749 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data-custom\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659880 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b28283-6f65-478e-952d-f965423f413e-logs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659929 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-combined-ca-bundle\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.659970 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-public-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.660089 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8wl\" (UniqueName: \"kubernetes.io/projected/04b28283-6f65-478e-952d-f965423f413e-kube-api-access-5p8wl\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: 
\"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.660123 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-internal-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.660218 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.665264 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668ffb7f9d-shvfm"] Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.761784 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b28283-6f65-478e-952d-f965423f413e-logs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762050 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-combined-ca-bundle\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-public-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p8wl\" (UniqueName: \"kubernetes.io/projected/04b28283-6f65-478e-952d-f965423f413e-kube-api-access-5p8wl\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762179 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-internal-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762280 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.762314 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data-custom\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.776071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04b28283-6f65-478e-952d-f965423f413e-logs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.787334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data-custom\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.788458 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p8wl\" (UniqueName: \"kubernetes.io/projected/04b28283-6f65-478e-952d-f965423f413e-kube-api-access-5p8wl\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.789391 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-config-data\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.789473 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-public-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.790624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-internal-tls-certs\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.800730 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04b28283-6f65-478e-952d-f965423f413e-combined-ca-bundle\") pod \"barbican-api-668ffb7f9d-shvfm\" (UID: \"04b28283-6f65-478e-952d-f965423f413e\") " pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:52 crc kubenswrapper[4985]: I0128 18:37:52.984037 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.115423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerStarted","Data":"c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71"} Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.115574 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.118757 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerStarted","Data":"2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6"} Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.119360 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.119722 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.149034 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" podStartSLOduration=4.149016443 podStartE2EDuration="4.149016443s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:53.137692933 +0000 UTC m=+1483.964255754" watchObservedRunningTime="2026-01-28 18:37:53.149016443 +0000 UTC m=+1483.975579264" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.176305 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-59699bb574-kg5jx" podStartSLOduration=4.176289013 podStartE2EDuration="4.176289013s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:37:53.170435387 +0000 UTC m=+1483.996998218" watchObservedRunningTime="2026-01-28 18:37:53.176289013 +0000 UTC m=+1484.002851834" Jan 28 18:37:53 crc kubenswrapper[4985]: I0128 18:37:53.743399 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" probeResult="failure" output=< Jan 28 18:37:53 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:37:53 crc kubenswrapper[4985]: > Jan 28 18:37:54 crc kubenswrapper[4985]: I0128 18:37:54.775915 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-668ffb7f9d-shvfm"] Jan 28 18:37:55 crc kubenswrapper[4985]: I0128 18:37:55.141092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c84c9469f-9xntt" event={"ID":"d885ddad-ecc9-4b73-ad9e-9da819f95107","Type":"ContainerStarted","Data":"65d032df38073e7eed22de53eed520ab01274bb31a016414dd7747a7dc134f9f"} Jan 28 18:37:55 crc kubenswrapper[4985]: I0128 18:37:55.144559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" 
event={"ID":"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2","Type":"ContainerStarted","Data":"b187f34b7b0c1a993d79520e94dd72989fc4652080d3971e8bb237cf1a5f5254"} Jan 28 18:37:58 crc kubenswrapper[4985]: I0128 18:37:58.182937 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668ffb7f9d-shvfm" event={"ID":"04b28283-6f65-478e-952d-f965423f413e","Type":"ContainerStarted","Data":"1450d3d2d780e38c895e0250be3018badb615c82f768d6a788516b52de14c5ca"} Jan 28 18:37:59 crc kubenswrapper[4985]: I0128 18:37:59.760083 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:37:59 crc kubenswrapper[4985]: I0128 18:37:59.868567 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:37:59 crc kubenswrapper[4985]: I0128 18:37:59.868809 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" containerID="cri-o://16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e" gracePeriod=10 Jan 28 18:38:00 crc kubenswrapper[4985]: I0128 18:38:00.223357 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerID="16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e" exitCode=0 Jan 28 18:38:00 crc kubenswrapper[4985]: I0128 18:38:00.223492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerDied","Data":"16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e"} Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.250205 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" event={"ID":"8ab3789a-5136-46f9-94bb-ab43720d0723","Type":"ContainerDied","Data":"bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199"} Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.250615 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.250976 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278079 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278225 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278417 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278526 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.278553 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") pod \"8ab3789a-5136-46f9-94bb-ab43720d0723\" (UID: \"8ab3789a-5136-46f9-94bb-ab43720d0723\") " Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.290442 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv" (OuterVolumeSpecName: "kube-api-access-g6nkv") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "kube-api-access-g6nkv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.383473 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6nkv\" (UniqueName: \"kubernetes.io/projected/8ab3789a-5136-46f9-94bb-ab43720d0723-kube-api-access-g6nkv\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.544196 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.547650 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.556471 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config" (OuterVolumeSpecName: "config") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.570414 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.576152 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "8ab3789a-5136-46f9-94bb-ab43720d0723" (UID: "8ab3789a-5136-46f9-94bb-ab43720d0723"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595626 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595672 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595693 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595704 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: I0128 18:38:01.595716 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/8ab3789a-5136-46f9-94bb-ab43720d0723-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:01 crc kubenswrapper[4985]: E0128 18:38:01.660855 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.212952 4985 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:02 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:02 crc kubenswrapper[4985]: > Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.261804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" event={"ID":"f4b18150-cbd6-4c6f-a28b-8c66b1e875f2","Type":"ContainerStarted","Data":"f17b4f1c899896446fc4d315cea6eb1314dd9bdda7a98f219356bcd0896588d7"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.264946 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerStarted","Data":"ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265161 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" containerID="cri-o://e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e" gracePeriod=30 Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265292 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265333 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" containerID="cri-o://ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6" gracePeriod=30 Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.265373 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" containerID="cri-o://1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4" gracePeriod=30 Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274505 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668ffb7f9d-shvfm" event={"ID":"04b28283-6f65-478e-952d-f965423f413e","Type":"ContainerStarted","Data":"bc5e99b080cb28b67a368202056e01128443f9359cda4cba67410852e4d84ba9"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-668ffb7f9d-shvfm" event={"ID":"04b28283-6f65-478e-952d-f965423f413e","Type":"ContainerStarted","Data":"5fbbc6c10659230bfc586124b91a3a8cec90cfd9be6b10949193dfdf305e6c6a"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274751 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.274769 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-668ffb7f9d-shvfm" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.288540 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-6c84c9469f-9xntt" event={"ID":"d885ddad-ecc9-4b73-ad9e-9da819f95107","Type":"ContainerStarted","Data":"6beae4c3610560067d7f82af1bd5645b5653e1d0ddb60018480cdd6a1a8157c8"} Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.288610 4985 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-56df8fb6b7-zbf7x" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.323782 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-6cc6bcfccd-rh55k" podStartSLOduration=9.527566532 podStartE2EDuration="13.323754347s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="2026-01-28 18:37:50.398660903 +0000 UTC m=+1481.225223714" lastFinishedPulling="2026-01-28 18:37:54.194848708 +0000 UTC m=+1485.021411529" observedRunningTime="2026-01-28 18:38:02.277911332 +0000 UTC m=+1493.104474153" watchObservedRunningTime="2026-01-28 18:38:02.323754347 +0000 UTC m=+1493.150317168" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.443491 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-668ffb7f9d-shvfm" podStartSLOduration=10.443468056 podStartE2EDuration="10.443468056s" podCreationTimestamp="2026-01-28 18:37:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:02.332835123 +0000 UTC m=+1493.159397944" watchObservedRunningTime="2026-01-28 18:38:02.443468056 +0000 UTC m=+1493.270030877" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.452789 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-6c84c9469f-9xntt" podStartSLOduration=9.916963425 podStartE2EDuration="13.452771519s" podCreationTimestamp="2026-01-28 18:37:49 +0000 UTC" firstStartedPulling="2026-01-28 18:37:50.661972107 +0000 UTC m=+1481.488534928" lastFinishedPulling="2026-01-28 18:37:54.197780201 +0000 UTC m=+1485.024343022" observedRunningTime="2026-01-28 18:38:02.352994292 +0000 UTC m=+1493.179557113" watchObservedRunningTime="2026-01-28 18:38:02.452771519 +0000 UTC m=+1493.279334340" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.489951 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.501295 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-56df8fb6b7-zbf7x"] Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.655604 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:02 crc kubenswrapper[4985]: E0128 18:38:02.743286 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ab3789a_5136_46f9_94bb_ab43720d0723.slice/crio-bb6124dbab624d93a758012ac4a116c2df0bf0ef9b2b7c1829d183f1fd72b199\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8ab3789a_5136_46f9_94bb_ab43720d0723.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-conmon-1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6.scope\": RecentStats: unable to find data in memory cache], 
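[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-conmon-ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6.scope\": RecentStats: unable to find data in memory cache]"

Two things in this stretch are easy to misread. First, the pod_startup_latency_tracker entries report both podStartE2EDuration and podStartSLOduration, and the SLO figure excludes image-pull time: for barbican-keystone-listener-6cc6bcfccd-rh55k, 13.323754347 - 9.527566532 = 3.796187815 s, exactly lastFinishedPulling - firstStartedPulling on the monotonic clock (m=+1485.021411529 - m=+1481.225223714). Second, the cadvisor "RecentStats: unable to find data in memory cache" partial failure lists only cgroups of containers that were just killed (the dnsmasq and ceilometer pods), so it is expected teardown noise rather than a stats-pipeline fault. The duration arithmetic is checkable directly:

```go
package main

import "fmt"

// Numbers copied from the pod_startup_latency_tracker entries above
// (monotonic-clock m= offsets, in seconds).
func main() {
	e2e := 13.323754347 // podStartE2EDuration
	slo := 9.527566532  // podStartSLOduration (image-pull time excluded)
	firstPull := 1481.225223714
	lastPull := 1485.021411529

	fmt.Printf("e2e - slo   = %.9f s\n", e2e-slo)            // 3.796187815
	fmt.Printf("pull window = %.9f s\n", lastPull-firstPull) // 3.796187815
}
```
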
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d1d02ed_9b38_404a_8926_9d4aaf7bab57.slice/crio-conmon-ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:38:02 crc kubenswrapper[4985]: I0128 18:38:02.948116 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.284600 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" path="/var/lib/kubelet/pods/8ab3789a-5136-46f9-94bb-ab43720d0723/volumes" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317437 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerID="ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6" exitCode=0 Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317473 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerID="1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4" exitCode=2 Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317481 4985 generic.go:334] "Generic (PLEG): container finished" podID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerID="e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e" exitCode=0 Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317523 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6"} Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317575 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4"} Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.317587 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e"} Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.721224 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:03 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:03 crc kubenswrapper[4985]: > Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.866464 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.947908 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948273 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948407 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948641 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.948780 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.949061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.949178 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.949385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") pod \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\" (UID: \"2d1d02ed-9b38-404a-8926-9d4aaf7bab57\") " Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.951268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.953062 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.953146 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:03 crc kubenswrapper[4985]: I0128 18:38:03.993615 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629" (OuterVolumeSpecName: "kube-api-access-4s629") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "kube-api-access-4s629". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.012432 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts" (OuterVolumeSpecName: "scripts") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.034930 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.056875 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.056949 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s629\" (UniqueName: \"kubernetes.io/projected/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-kube-api-access-4s629\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.056969 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.062752 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.075406 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data" (OuterVolumeSpecName: "config-data") pod "2d1d02ed-9b38-404a-8926-9d4aaf7bab57" (UID: "2d1d02ed-9b38-404a-8926-9d4aaf7bab57"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.159081 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.159124 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d1d02ed-9b38-404a-8926-9d4aaf7bab57-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.330523 4985 generic.go:334] "Generic (PLEG): container finished" podID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerID="d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce" exitCode=0 Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.330597 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerDied","Data":"d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce"} Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.337671 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d1d02ed-9b38-404a-8926-9d4aaf7bab57","Type":"ContainerDied","Data":"3ae1387fe5106b01146f4fc344eb6732aa4c0dba8627d7a78e6bf597fe2799b6"} Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.337738 4985 scope.go:117] "RemoveContainer" containerID="ef108865030663cb278d34e2c603ba0cf56627dcb1565e258e211dd0f345f1e6" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.337933 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.371642 4985 scope.go:117] "RemoveContainer" containerID="1fe5f92902fe305b4cccf72044e768fdbb447b14f8f898e1c916ebc9978069b4" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.422358 4985 scope.go:117] "RemoveContainer" containerID="e7c5bbe824f52654b03b71b358549ed805dc4f0a1f3bd28f0c806b7f6c63294e" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.441404 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.461658 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475184 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475660 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475683 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475698 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475706 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475728 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475736 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475749 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="init" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475756 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="init" Jan 28 18:38:04 crc kubenswrapper[4985]: E0128 18:38:04.475779 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.475785 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476467 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="ceilometer-notification-agent" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476491 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="proxy-httpd" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476507 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" containerName="sg-core" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.476519 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ab3789a-5136-46f9-94bb-ab43720d0723" containerName="dnsmasq-dns" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.478974 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.482486 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.482649 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.496185 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567452 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567616 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.567798 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: 
I0128 18:38:04.669428 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669469 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669489 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669510 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669542 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.669563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.670103 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.670663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.674204 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.674519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.675367 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.675955 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.697628 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"ceilometer-0\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " pod="openstack/ceilometer-0" Jan 28 18:38:04 crc kubenswrapper[4985]: I0128 18:38:04.809858 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.295908 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d1d02ed-9b38-404a-8926-9d4aaf7bab57" path="/var/lib/kubelet/pods/2d1d02ed-9b38-404a-8926-9d4aaf7bab57/volumes" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.297337 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.350516 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"24f37b343823af87929d4be979bf978ca07c8b7fe426ee346d1a058ab94e67be"} Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.772605 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.910148 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") pod \"dda9fdbc-ce81-4e63-b32f-733379d893d4\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.910609 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") pod \"dda9fdbc-ce81-4e63-b32f-733379d893d4\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.910678 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") pod \"dda9fdbc-ce81-4e63-b32f-733379d893d4\" (UID: \"dda9fdbc-ce81-4e63-b32f-733379d893d4\") " Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.915150 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf" (OuterVolumeSpecName: "kube-api-access-8n5mf") pod "dda9fdbc-ce81-4e63-b32f-733379d893d4" (UID: "dda9fdbc-ce81-4e63-b32f-733379d893d4"). InnerVolumeSpecName "kube-api-access-8n5mf". 
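PluginName "kubernetes.io/projected", VolumeGidValue ""

Teardown of the finished heat-db-sync-qjrfx pod shows the unmount path's two stages: UnmountVolume.TearDown succeeded (the plugin removes the pod-level mount; OuterVolumeSpecName is the name from the pod spec, InnerVolumeSpecName the plugin-level name, identical here) and then "Volume detached ... DevicePath \"\"" once the reconciler records the volume gone from node crc. The same window closes out the one-shot neutron, cinder, and heat db-sync pods, each finishing with exitCode=0 before being reaped. A compact sketch of the two-stage bookkeeping, names invented:

```go
package main

import "fmt"

// Illustrative only: two-stage unmount bookkeeping in the spirit of
// operationExecutor.UnmountVolume -> "Volume detached". Invented types.
type worldState struct {
	mounted map[string]string // volume name -> device path ("" for secrets/configmaps)
}

// tearDown is stage one: the volume plugin removes the pod-level mount.
func (w *worldState) tearDown(vol string) error {
	if _, ok := w.mounted[vol]; !ok {
		return fmt.Errorf("volume %q not mounted", vol)
	}
	fmt.Printf("UnmountVolume.TearDown succeeded for volume %q\n", vol)
	return nil
}

// markDetached is stage two: the reconciler records the volume as gone.
func (w *worldState) markDetached(vol string) {
	fmt.Printf("Volume detached for volume %q DevicePath %q\n", vol, w.mounted[vol])
	delete(w.mounted, vol)
}

func main() {
	w := &worldState{mounted: map[string]string{
		"config-data":        "", // secret-backed volumes have no device path
		"combined-ca-bundle": "",
	}}
	for _, v := range []string{"config-data", "combined-ca-bundle"} {
		if err := w.tearDown(v); err == nil {
			w.markDetached(v)
		}
	}
}
```
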
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:05 crc kubenswrapper[4985]: I0128 18:38:05.953154 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dda9fdbc-ce81-4e63-b32f-733379d893d4" (UID: "dda9fdbc-ce81-4e63-b32f-733379d893d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.002627 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data" (OuterVolumeSpecName: "config-data") pod "dda9fdbc-ce81-4e63-b32f-733379d893d4" (UID: "dda9fdbc-ce81-4e63-b32f-733379d893d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.014878 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.014930 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8n5mf\" (UniqueName: \"kubernetes.io/projected/dda9fdbc-ce81-4e63-b32f-733379d893d4-kube-api-access-8n5mf\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.014946 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dda9fdbc-ce81-4e63-b32f-733379d893d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.375266 4985 generic.go:334] "Generic (PLEG): container finished" podID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerID="461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1" exitCode=0 Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.375293 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerDied","Data":"461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.378084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.379478 4985 generic.go:334] "Generic (PLEG): container finished" podID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerID="ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06" exitCode=0 Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.379527 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerDied","Data":"ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.381242 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-qjrfx" event={"ID":"dda9fdbc-ce81-4e63-b32f-733379d893d4","Type":"ContainerDied","Data":"29e494db6715043d1dade09c32717d476d44c5754f6d809807167b425de76172"} Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.381298 4985 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29e494db6715043d1dade09c32717d476d44c5754f6d809807167b425de76172" Jan 28 18:38:06 crc kubenswrapper[4985]: I0128 18:38:06.381358 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-qjrfx" Jan 28 18:38:07 crc kubenswrapper[4985]: I0128 18:38:07.393918 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed"} Jan 28 18:38:07 crc kubenswrapper[4985]: I0128 18:38:07.969842 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.010032 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080260 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080318 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.080439 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") pod \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\" (UID: \"feecd29d-1d64-47f4-a1af-e634b7d87f3a\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.083358 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.088377 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.091366 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts" (OuterVolumeSpecName: "scripts") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.091544 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4" (OuterVolumeSpecName: "kube-api-access-szgd4") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "kube-api-access-szgd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.136484 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.155787 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data" (OuterVolumeSpecName: "config-data") pod "feecd29d-1d64-47f4-a1af-e634b7d87f3a" (UID: "feecd29d-1d64-47f4-a1af-e634b7d87f3a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.182564 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") pod \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.182789 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") pod \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.182827 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") pod \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\" (UID: \"b64f0d6c-55b7-4eac-85f6-e78b581cbebc\") " Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184428 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184676 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-szgd4\" (UniqueName: \"kubernetes.io/projected/feecd29d-1d64-47f4-a1af-e634b7d87f3a-kube-api-access-szgd4\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184691 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/feecd29d-1d64-47f4-a1af-e634b7d87f3a-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184699 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184707 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.184715 4985 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/feecd29d-1d64-47f4-a1af-e634b7d87f3a-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.188575 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs" (OuterVolumeSpecName: "kube-api-access-kx7rs") pod "b64f0d6c-55b7-4eac-85f6-e78b581cbebc" (UID: "b64f0d6c-55b7-4eac-85f6-e78b581cbebc"). InnerVolumeSpecName "kube-api-access-kx7rs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.215511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config" (OuterVolumeSpecName: "config") pod "b64f0d6c-55b7-4eac-85f6-e78b581cbebc" (UID: "b64f0d6c-55b7-4eac-85f6-e78b581cbebc"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.218374 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b64f0d6c-55b7-4eac-85f6-e78b581cbebc" (UID: "b64f0d6c-55b7-4eac-85f6-e78b581cbebc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.287367 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.287405 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kx7rs\" (UniqueName: \"kubernetes.io/projected/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-kube-api-access-kx7rs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.287430 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b64f0d6c-55b7-4eac-85f6-e78b581cbebc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.404678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-dwwcb" event={"ID":"b64f0d6c-55b7-4eac-85f6-e78b581cbebc","Type":"ContainerDied","Data":"94e9ea7881e540161402fe0b16a42aca0004dbafe8de2259a73da5d4a537b2b5"} Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.404727 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e9ea7881e540161402fe0b16a42aca0004dbafe8de2259a73da5d4a537b2b5" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.404795 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-dwwcb" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.407318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275"} Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.408507 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s8hs9" event={"ID":"feecd29d-1d64-47f4-a1af-e634b7d87f3a","Type":"ContainerDied","Data":"1b5ced815ed25f34faa5ff921cdb8509638b39e75db318b0ce2521c26d4d3829"} Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.408537 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1b5ced815ed25f34faa5ff921cdb8509638b39e75db318b0ce2521c26d4d3829" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.408612 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s8hs9" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.639523 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.640032 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerName="neutron-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640049 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerName="neutron-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.640088 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerName="cinder-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640094 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerName="cinder-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.640110 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerName="heat-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640117 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerName="heat-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640324 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" containerName="neutron-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640349 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" containerName="cinder-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.640358 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" containerName="heat-db-sync" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.641582 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.672548 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.782511 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.785013 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802771 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802827 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.802945 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.803091 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.810398 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.812812 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.815881 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.816083 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-r9qmf" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.816281 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.816452 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824075 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.830435 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824416 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824515 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.824567 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-cnbtl" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.880905 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.912810 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.912879 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.912983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913029 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913092 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod 
\"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913213 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913298 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913378 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913424 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913471 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913526 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913572 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod 
\"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913638 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913664 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.913726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.914584 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.934358 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.938783 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.940040 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.943993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.956159 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"] Jan 28 18:38:08 crc kubenswrapper[4985]: E0128 18:38:08.957320 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-csvjk], unattached volumes=[], failed to process volumes=[]: context canceled" 
pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" podUID="deec912d-352f-4d4a-9259-cf645aab16da" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.981371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"dnsmasq-dns-848cf88cfc-rbz5c\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") " pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c" Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.987463 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:08 crc kubenswrapper[4985]: I0128 18:38:08.990084 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016174 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016238 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016296 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016360 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016411 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016454 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016485 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016508 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.016558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.029553 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.033760 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.045994 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.048703 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.048748 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.049913 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.066116 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.066865 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.068892 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.069869 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.082010 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"neutron-d8b8b566d-89qjp\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.093917 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"cinder-scheduler-0\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.128793 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.130538 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.139977 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.140135 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.141258 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.141476 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.141886 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.150334 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.152837 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.157529 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.159840 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.172285 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246894 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246950 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.246986 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.247065 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.248757 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251641 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251671 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251705 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251730 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.251901 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.252464 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.252600 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.254396 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.254959 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.255263 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.255471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm" 
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.275858 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"dnsmasq-dns-6578955fd5-j67tm\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " pod="openstack/dnsmasq-dns-6578955fd5-j67tm"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.344667 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363506 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363685 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363785 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.363847 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.371418 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.371474 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.390585 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.392529 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.393533 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.417550 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.418486 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.450581 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.481375 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.568891 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") "
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") "
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569026 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") "
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569110 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") "
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.569160 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") "
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.572985 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.573942 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.574383 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") pod \"deec912d-352f-4d4a-9259-cf645aab16da\" (UID: \"deec912d-352f-4d4a-9259-cf645aab16da\") "
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.574421 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.575450 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.575467 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.575487 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.576514 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.576728 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config" (OuterVolumeSpecName: "config") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.589083 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk" (OuterVolumeSpecName: "kube-api-access-csvjk") pod "deec912d-352f-4d4a-9259-cf645aab16da" (UID: "deec912d-352f-4d4a-9259-cf645aab16da"). InnerVolumeSpecName "kube-api-access-csvjk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.656667 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.678786 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.678823 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-csvjk\" (UniqueName: \"kubernetes.io/projected/deec912d-352f-4d4a-9259-cf645aab16da-kube-api-access-csvjk\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:09 crc kubenswrapper[4985]: I0128 18:38:09.678833 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/deec912d-352f-4d4a-9259-cf645aab16da-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.114522 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"]
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.131691 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 18:38:10 crc kubenswrapper[4985]: W0128 18:38:10.140446 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc3a8f8a9_e888_4754_94da_0ef0e972c995.slice/crio-2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc WatchSource:0}: Error finding container 2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc: Status 404 returned error can't find the container with id 2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.303018 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 18:38:10 crc kubenswrapper[4985]: W0128 18:38:10.304393 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda366d8d5_30e8_4d85_aadc_af770270ffcf.slice/crio-c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678 WatchSource:0}: Error finding container c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678: Status 404 returned error can't find the container with id c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.383806 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"]
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.460722 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerStarted","Data":"c9f68ac609dd2f41623830c63a61e02d6c06dc430a7f02a9f5349b8bf758436d"}
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.461648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerStarted","Data":"31388f0bf206620f4149df49b7f517c8ef12fb63e7bf921a506b07d05954b8ce"}
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.462480 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerStarted","Data":"2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc"}
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.463362 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-848cf88cfc-rbz5c"
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.463352 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerStarted","Data":"c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678"}
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.530099 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"]
Jan 28 18:38:10 crc kubenswrapper[4985]: I0128 18:38:10.537163 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-848cf88cfc-rbz5c"]
Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.333281 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deec912d-352f-4d4a-9259-cf645aab16da" path="/var/lib/kubelet/pods/deec912d-352f-4d4a-9259-cf645aab16da/volumes"
Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.342820 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.516161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerStarted","Data":"eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2"}
Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.527484 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerStarted","Data":"a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7"}
Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.541121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerStarted","Data":"c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806"}
Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.581824 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668ffb7f9d-shvfm"
Jan 28 18:38:11 crc kubenswrapper[4985]: I0128 18:38:11.911514 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-668ffb7f9d-shvfm"
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:11.993853 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"]
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:11.994087 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" containerID="cri-o://12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6" gracePeriod=30
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:11.994601 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" containerID="cri-o://2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6" gracePeriod=30
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.157243 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=<
Jan 28 18:38:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 18:38:12 crc kubenswrapper[4985]: >
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.553284 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerStarted","Data":"b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee"}
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.556084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerStarted","Data":"f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb"}
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.557827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d8b8b566d-89qjp"
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.560215 4985 generic.go:334] "Generic (PLEG): container finished" podID="523590c1-de57-4248-aa7f-2c52024d649e" containerID="12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6" exitCode=143
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.560284 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerDied","Data":"12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6"}
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.562543 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerID="c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806" exitCode=0
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.563031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerDied","Data":"c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806"}
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.563297 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.583817 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d8b8b566d-89qjp" podStartSLOduration=4.583798911 podStartE2EDuration="4.583798911s" podCreationTimestamp="2026-01-28 18:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:12.576083133 +0000 UTC m=+1503.402645964" watchObservedRunningTime="2026-01-28 18:38:12.583798911 +0000 UTC m=+1503.410361732"
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.608749 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.123927336 podStartE2EDuration="8.608718075s" podCreationTimestamp="2026-01-28 18:38:04 +0000 UTC" firstStartedPulling="2026-01-28 18:38:05.308140052 +0000 UTC m=+1496.134702873" lastFinishedPulling="2026-01-28 18:38:10.792930791 +0000 UTC m=+1501.619493612" observedRunningTime="2026-01-28 18:38:12.601759478 +0000 UTC m=+1503.428322309" watchObservedRunningTime="2026-01-28 18:38:12.608718075 +0000 UTC m=+1503.435280896"
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.722686 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8fg44"
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.778801 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8fg44"
Jan 28 18:38:12 crc kubenswrapper[4985]: I0128 18:38:12.983910 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"]
Jan 28 18:38:14 crc kubenswrapper[4985]: I0128 18:38:14.591694 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8fg44" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" containerID="cri-o://63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249" gracePeriod=2
Jan 28 18:38:14 crc kubenswrapper[4985]: I0128 18:38:14.592335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerStarted","Data":"911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106"}
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.437089 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-f49f9645f-bs9wr"]
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.439650 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f49f9645f-bs9wr"
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.442068 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc"
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.444372 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc"
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.448265 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f49f9645f-bs9wr"]
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.494740 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:60860->10.217.0.200:9311: read: connection reset by peer"
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.495039 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-59699bb574-kg5jx" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.200:9311/healthcheck\": read tcp 10.217.0.2:60868->10.217.0.200:9311: read: connection reset by peer"
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-httpd-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr"
Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578703 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-ovndb-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID:
\"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578748 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-combined-ca-bundle\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578846 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578902 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-public-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.578925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xhwr\" (UniqueName: \"kubernetes.io/projected/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-kube-api-access-9xhwr\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.579183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-internal-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.613354 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerStarted","Data":"0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0"} Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.613507 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" containerID="cri-o://b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee" gracePeriod=30 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.613774 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.614059 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" containerID="cri-o://0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0" gracePeriod=30 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.621604 4985 generic.go:334] "Generic (PLEG): container finished" podID="493defdf-169c-4278-b370-69068ec73439" containerID="63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249" exitCode=0 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.621686 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249"} Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.625267 4985 generic.go:334] "Generic (PLEG): container finished" podID="523590c1-de57-4248-aa7f-2c52024d649e" containerID="2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6" exitCode=0 Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.625422 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerDied","Data":"2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6"} Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.625513 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.646801 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.646780176 podStartE2EDuration="6.646780176s" podCreationTimestamp="2026-01-28 18:38:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:15.641639881 +0000 UTC m=+1506.468202702" watchObservedRunningTime="2026-01-28 18:38:15.646780176 +0000 UTC m=+1506.473342997" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.675289 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" podStartSLOduration=7.67526922 podStartE2EDuration="7.67526922s" podCreationTimestamp="2026-01-28 18:38:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:15.671322439 +0000 UTC m=+1506.497885270" watchObservedRunningTime="2026-01-28 18:38:15.67526922 +0000 UTC m=+1506.501832041" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681798 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-ovndb-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681839 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-combined-ca-bundle\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681879 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681913 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-public-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: 
\"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.681934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xhwr\" (UniqueName: \"kubernetes.io/projected/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-kube-api-access-9xhwr\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.682021 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-internal-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.682071 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-httpd-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.688793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-combined-ca-bundle\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.688858 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-internal-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.688974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.689675 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-httpd-config\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.689890 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-public-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.706165 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-ovndb-tls-certs\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.715844 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9xhwr\" (UniqueName: \"kubernetes.io/projected/2177b5b3-0121-4ff8-93dd-2f9ef36560f4-kube-api-access-9xhwr\") pod \"neutron-f49f9645f-bs9wr\" (UID: \"2177b5b3-0121-4ff8-93dd-2f9ef36560f4\") " pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:15 crc kubenswrapper[4985]: I0128 18:38:15.761330 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.217078 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.302013 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") pod \"493defdf-169c-4278-b370-69068ec73439\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.302060 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") pod \"493defdf-169c-4278-b370-69068ec73439\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.302218 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") pod \"493defdf-169c-4278-b370-69068ec73439\" (UID: \"493defdf-169c-4278-b370-69068ec73439\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.303843 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities" (OuterVolumeSpecName: "utilities") pod "493defdf-169c-4278-b370-69068ec73439" (UID: "493defdf-169c-4278-b370-69068ec73439"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.307421 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m" (OuterVolumeSpecName: "kube-api-access-dt55m") pod "493defdf-169c-4278-b370-69068ec73439" (UID: "493defdf-169c-4278-b370-69068ec73439"). InnerVolumeSpecName "kube-api-access-dt55m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.364134 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "493defdf-169c-4278-b370-69068ec73439" (UID: "493defdf-169c-4278-b370-69068ec73439"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.408907 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dt55m\" (UniqueName: \"kubernetes.io/projected/493defdf-169c-4278-b370-69068ec73439-kube-api-access-dt55m\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.408948 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.408963 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/493defdf-169c-4278-b370-69068ec73439-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.786890 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8fg44" event={"ID":"493defdf-169c-4278-b370-69068ec73439","Type":"ContainerDied","Data":"80ceba888693469af3d53c546cb7c4eba0040a2f5c19424d7894edf743d991ac"} Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.786953 4985 scope.go:117] "RemoveContainer" containerID="63e0086da0afee817b7148269b8c4f5d7b0062e853c8143945bbd576d3419249" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.787118 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8fg44" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.810805 4985 generic.go:334] "Generic (PLEG): container finished" podID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerID="0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0" exitCode=0 Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.811003 4985 generic.go:334] "Generic (PLEG): container finished" podID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerID="b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee" exitCode=143 Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.810899 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerDied","Data":"0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0"} Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.811104 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerDied","Data":"b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee"} Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.835010 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.840433 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.852922 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8fg44"] Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.892493 4985 scope.go:117] "RemoveContainer" containerID="0f31ce051029b23ddf495fadb6b6c6e764037b32b8a976658fc8f5f168e24bfd" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.930277 4985 scope.go:117] "RemoveContainer" containerID="bb466fa56833f63c962ba1cccca2fbc2223625dc1bb00585f9df84071452e8e0" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932514 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932580 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932618 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932682 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.932928 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") pod \"a366d8d5-30e8-4d85-aadc-af770270ffcf\" (UID: \"a366d8d5-30e8-4d85-aadc-af770270ffcf\") " Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.936149 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: 
"a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.936466 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs" (OuterVolumeSpecName: "logs") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.939233 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.940007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl" (OuterVolumeSpecName: "kube-api-access-9dvwl") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "kube-api-access-9dvwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.944225 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts" (OuterVolumeSpecName: "scripts") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:16 crc kubenswrapper[4985]: I0128 18:38:16.974277 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.011231 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.041936 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.041985 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.041999 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9dvwl\" (UniqueName: \"kubernetes.io/projected/a366d8d5-30e8-4d85-aadc-af770270ffcf-kube-api-access-9dvwl\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.042023 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a366d8d5-30e8-4d85-aadc-af770270ffcf-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.042036 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.042049 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a366d8d5-30e8-4d85-aadc-af770270ffcf-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.080535 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data" (OuterVolumeSpecName: "config-data") pod "a366d8d5-30e8-4d85-aadc-af770270ffcf" (UID: "a366d8d5-30e8-4d85-aadc-af770270ffcf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.143225 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.143885 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144041 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144064 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") pod \"523590c1-de57-4248-aa7f-2c52024d649e\" (UID: \"523590c1-de57-4248-aa7f-2c52024d649e\") " Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.144753 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a366d8d5-30e8-4d85-aadc-af770270ffcf-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.151841 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs" (OuterVolumeSpecName: "logs") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.158106 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.163299 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57" (OuterVolumeSpecName: "kube-api-access-phx57") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "kube-api-access-phx57". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.198750 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.219461 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data" (OuterVolumeSpecName: "config-data") pod "523590c1-de57-4248-aa7f-2c52024d649e" (UID: "523590c1-de57-4248-aa7f-2c52024d649e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.223620 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-f49f9645f-bs9wr"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.247733 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248272 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-phx57\" (UniqueName: \"kubernetes.io/projected/523590c1-de57-4248-aa7f-2c52024d649e-kube-api-access-phx57\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248297 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248307 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/523590c1-de57-4248-aa7f-2c52024d649e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.248315 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/523590c1-de57-4248-aa7f-2c52024d649e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.283493 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="493defdf-169c-4278-b370-69068ec73439" path="/var/lib/kubelet/pods/493defdf-169c-4278-b370-69068ec73439/volumes" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.843143 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerStarted","Data":"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.848396 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f49f9645f-bs9wr" event={"ID":"2177b5b3-0121-4ff8-93dd-2f9ef36560f4","Type":"ContainerStarted","Data":"b38f86aab01647c33fd931b2887e8306fe6b60c3082f3c8a0524d15753040cbd"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.848434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f49f9645f-bs9wr" 
event={"ID":"2177b5b3-0121-4ff8-93dd-2f9ef36560f4","Type":"ContainerStarted","Data":"f6a56e7ca2cbe55d9d96a7ec5b4109a59c5bae6874eb564b5e45153daa640a8d"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.859971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"a366d8d5-30e8-4d85-aadc-af770270ffcf","Type":"ContainerDied","Data":"c2a05a5028ed951640a1c68987fde41ba3b23928ea5eb7e6830b545018a7b678"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.860035 4985 scope.go:117] "RemoveContainer" containerID="0c16b40db29be1f4541e29072a6720c2bc2a288a4cab20fe8917711e722a3ee0" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.861471 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.883430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-59699bb574-kg5jx" event={"ID":"523590c1-de57-4248-aa7f-2c52024d649e","Type":"ContainerDied","Data":"b40a3df1dc9713a67151a11bf3d8f9d8a40a7e6355071ab385f578c55e29abe5"} Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.883546 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-59699bb574-kg5jx" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.916056 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.939221 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.949900 4985 scope.go:117] "RemoveContainer" containerID="b986e0e1f69c17cd2f90d083a6b23c51f162ab4207d710a60ef1171acf5b47ee" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.950091 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983314 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983858 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-utilities" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983874 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-utilities" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983882 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983890 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983898 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983904 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983924 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983931 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983943 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983949 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983960 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983966 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: E0128 18:38:17.983975 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-content" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.983981 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="extract-content" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984187 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984210 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="493defdf-169c-4278-b370-69068ec73439" containerName="registry-server" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984226 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="523590c1-de57-4248-aa7f-2c52024d649e" containerName="barbican-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984233 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.984262 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" containerName="cinder-api-log" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.985507 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.990048 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.990302 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 18:38:17 crc kubenswrapper[4985]: I0128 18:38:17.994503 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.024097 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-59699bb574-kg5jx"] Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.037443 4985 scope.go:117] "RemoveContainer" containerID="2698171664b1988b8d867c63a620b6267012b187c8c37cd874c7c2d885a085f6" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-scripts\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068437 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7r6k\" (UniqueName: \"kubernetes.io/projected/841350c5-b9e8-4331-9282-e129f8152153-kube-api-access-z7r6k\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068470 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068501 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/841350c5-b9e8-4331-9282-e129f8152153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068527 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068548 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068567 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841350c5-b9e8-4331-9282-e129f8152153-logs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 
18:38:18.068596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data-custom\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.068618 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.099411 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/841350c5-b9e8-4331-9282-e129f8152153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170433 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170463 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170485 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841350c5-b9e8-4331-9282-e129f8152153-logs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data-custom\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.170554 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.171097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-scripts\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.171228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z7r6k\" (UniqueName: 
\"kubernetes.io/projected/841350c5-b9e8-4331-9282-e129f8152153-kube-api-access-z7r6k\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.171526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.172155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/841350c5-b9e8-4331-9282-e129f8152153-logs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.172226 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/841350c5-b9e8-4331-9282-e129f8152153-etc-machine-id\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.178460 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-scripts\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.179123 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.180418 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.182505 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data-custom\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.183909 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-public-tls-certs\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.189815 4985 scope.go:117] "RemoveContainer" containerID="12a6d8e4bde7f2aea885f58652606b47ee06325603d2e65299b0f8ec947adfe6" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.195200 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/841350c5-b9e8-4331-9282-e129f8152153-config-data\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.198102 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z7r6k\" (UniqueName: \"kubernetes.io/projected/841350c5-b9e8-4331-9282-e129f8152153-kube-api-access-z7r6k\") pod \"cinder-api-0\" (UID: \"841350c5-b9e8-4331-9282-e129f8152153\") " pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.321638 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.899688 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.902043 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerStarted","Data":"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1"} Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.906514 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-f49f9645f-bs9wr" event={"ID":"2177b5b3-0121-4ff8-93dd-2f9ef36560f4","Type":"ContainerStarted","Data":"69502d09c3c08ac438a5f391e8367403e3943212e34bd27ffee322b979a426f1"} Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.906668 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.953391 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.151288223 podStartE2EDuration="10.953367108s" podCreationTimestamp="2026-01-28 18:38:08 +0000 UTC" firstStartedPulling="2026-01-28 18:38:10.499489727 +0000 UTC m=+1501.326052548" lastFinishedPulling="2026-01-28 18:38:16.301568612 +0000 UTC m=+1507.128131433" observedRunningTime="2026-01-28 18:38:18.928224548 +0000 UTC m=+1509.754787369" watchObservedRunningTime="2026-01-28 18:38:18.953367108 +0000 UTC m=+1509.779929929" Jan 28 18:38:18 crc kubenswrapper[4985]: I0128 18:38:18.976591 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-f49f9645f-bs9wr" podStartSLOduration=3.976565713 podStartE2EDuration="3.976565713s" podCreationTimestamp="2026-01-28 18:38:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:18.949571681 +0000 UTC m=+1509.776134502" watchObservedRunningTime="2026-01-28 18:38:18.976565713 +0000 UTC m=+1509.803128534" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.130874 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.275600 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="523590c1-de57-4248-aa7f-2c52024d649e" path="/var/lib/kubelet/pods/523590c1-de57-4248-aa7f-2c52024d649e/volumes" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.276238 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a366d8d5-30e8-4d85-aadc-af770270ffcf" path="/var/lib/kubelet/pods/a366d8d5-30e8-4d85-aadc-af770270ffcf/volumes" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.353770 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.430327 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.431069 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" containerID="cri-o://c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71" gracePeriod=10 Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.758526 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.199:5353: connect: connection refused" Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.933642 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"841350c5-b9e8-4331-9282-e129f8152153","Type":"ContainerStarted","Data":"7a5dbf9806674a8b402004bcb6241785559d2470172868f6b3f6355f4dbb8231"} Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.934662 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"841350c5-b9e8-4331-9282-e129f8152153","Type":"ContainerStarted","Data":"5eb2c9b2d4b4c7eec82c9b4c50965c1dafe8e72106cb4de112b3e214c5037898"} Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.942318 4985 generic.go:334] "Generic (PLEG): container finished" podID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerID="c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71" exitCode=0 Jan 28 18:38:19 crc kubenswrapper[4985]: I0128 18:38:19.942415 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerDied","Data":"c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71"} Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.520055 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.576298 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-848676699d-9lbcr" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.621095 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740756 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740880 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740908 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740978 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.740997 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.741073 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") pod \"960c828e-51af-4e3c-a916-513bc8cbb0ff\" (UID: \"960c828e-51af-4e3c-a916-513bc8cbb0ff\") " Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.765500 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd" (OuterVolumeSpecName: "kube-api-access-9r9fd") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "kube-api-access-9r9fd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.834321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.834540 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config" (OuterVolumeSpecName: "config") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.844521 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9r9fd\" (UniqueName: \"kubernetes.io/projected/960c828e-51af-4e3c-a916-513bc8cbb0ff-kube-api-access-9r9fd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.844560 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.844603 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.853738 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.856543 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.900663 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "960c828e-51af-4e3c-a916-513bc8cbb0ff" (UID: "960c828e-51af-4e3c-a916-513bc8cbb0ff"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.946632 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.946667 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.946679 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/960c828e-51af-4e3c-a916-513bc8cbb0ff-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.957418 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"841350c5-b9e8-4331-9282-e129f8152153","Type":"ContainerStarted","Data":"a748d650d126ab8d46525fd8715fe314f85dc2f6816b2fac2b89d32e528f86ad"} Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.957797 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.961172 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" event={"ID":"960c828e-51af-4e3c-a916-513bc8cbb0ff","Type":"ContainerDied","Data":"dd0880e0b96ac3a23f885b549586af18ca3a6b0027c6f034c1105c8d228a817a"} Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.961234 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7c67bffd47-2whmk" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.961243 4985 scope.go:117] "RemoveContainer" containerID="c4611bd9d414c781ca052ec4109964bd6c046f579d3ac38792bf0555f1041a71" Jan 28 18:38:20 crc kubenswrapper[4985]: I0128 18:38:20.983898 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.983877965 podStartE2EDuration="3.983877965s" podCreationTimestamp="2026-01-28 18:38:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:20.983562656 +0000 UTC m=+1511.810125497" watchObservedRunningTime="2026-01-28 18:38:20.983877965 +0000 UTC m=+1511.810440786" Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.040449 4985 scope.go:117] "RemoveContainer" containerID="e23d36aeeab5ee663f101fb703501f68e124bafdaaddaec3cfc6864e9e9081f8" Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.074036 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.089358 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7c67bffd47-2whmk"] Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.191307 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-77c7879f98-bcrvp" Jan 28 18:38:21 crc kubenswrapper[4985]: I0128 18:38:21.278340 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" path="/var/lib/kubelet/pods/960c828e-51af-4e3c-a916-513bc8cbb0ff/volumes" Jan 28 18:38:22 crc kubenswrapper[4985]: I0128 18:38:22.118107 4985 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:22 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:22 crc kubenswrapper[4985]: > Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.171401 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 18:38:24 crc kubenswrapper[4985]: E0128 18:38:24.172982 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="init" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.173056 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="init" Jan 28 18:38:24 crc kubenswrapper[4985]: E0128 18:38:24.173138 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.173191 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.173509 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="960c828e-51af-4e3c-a916-513bc8cbb0ff" containerName="dnsmasq-dns" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.174406 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.176856 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.178169 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.178536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-664wv" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.183965 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228576 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228749 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57stt\" (UniqueName: \"kubernetes.io/projected/1d8f391e-0ed3-4969-b61b-5b9d602644fa-kube-api-access-57stt\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.228940 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.331163 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.331621 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57stt\" (UniqueName: \"kubernetes.io/projected/1d8f391e-0ed3-4969-b61b-5b9d602644fa-kube-api-access-57stt\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.331703 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.332142 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.332766 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.349105 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-openstack-config-secret\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.349268 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8f391e-0ed3-4969-b61b-5b9d602644fa-combined-ca-bundle\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.353508 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57stt\" (UniqueName: \"kubernetes.io/projected/1d8f391e-0ed3-4969-b61b-5b9d602644fa-kube-api-access-57stt\") pod \"openstackclient\" (UID: \"1d8f391e-0ed3-4969-b61b-5b9d602644fa\") " pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.476116 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.505714 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 18:38:24 crc kubenswrapper[4985]: I0128 18:38:24.541321 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:25 crc kubenswrapper[4985]: I0128 18:38:25.019530 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" containerID="cri-o://a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" gracePeriod=30 Jan 28 18:38:25 crc kubenswrapper[4985]: I0128 18:38:25.019588 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" containerID="cri-o://c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" gracePeriod=30 Jan 28 18:38:25 crc kubenswrapper[4985]: I0128 18:38:25.050044 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 18:38:25 crc kubenswrapper[4985]: W0128 18:38:25.064333 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1d8f391e_0ed3_4969_b61b_5b9d602644fa.slice/crio-19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e WatchSource:0}: Error finding container 19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e: Status 404 returned error can't find the container with id 19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.032192 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d8f391e-0ed3-4969-b61b-5b9d602644fa","Type":"ContainerStarted","Data":"19c2c6b8499bb9e5522440a21baf349cc34c095dc0e31b1ed34b87074564860e"} Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.035211 4985 generic.go:334] "Generic (PLEG): container finished" podID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" exitCode=0 Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.035259 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerDied","Data":"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d"} Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.535292 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.592944 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593029 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593073 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593305 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.593429 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") pod \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\" (UID: \"a93c21ad-4841-48c4-95a2-c2876a2fffd1\") " Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.595275 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.601025 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv" (OuterVolumeSpecName: "kube-api-access-m2wgv") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "kube-api-access-m2wgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.605016 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.606571 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts" (OuterVolumeSpecName: "scripts") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.684061 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697341 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697388 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697404 4985 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a93c21ad-4841-48c4-95a2-c2876a2fffd1-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697415 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.697428 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m2wgv\" (UniqueName: \"kubernetes.io/projected/a93c21ad-4841-48c4-95a2-c2876a2fffd1-kube-api-access-m2wgv\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.803454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data" (OuterVolumeSpecName: "config-data") pod "a93c21ad-4841-48c4-95a2-c2876a2fffd1" (UID: "a93c21ad-4841-48c4-95a2-c2876a2fffd1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:26 crc kubenswrapper[4985]: I0128 18:38:26.902962 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a93c21ad-4841-48c4-95a2-c2876a2fffd1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048727 4985 generic.go:334] "Generic (PLEG): container finished" podID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" exitCode=0 Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerDied","Data":"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1"} Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048797 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"a93c21ad-4841-48c4-95a2-c2876a2fffd1","Type":"ContainerDied","Data":"31388f0bf206620f4149df49b7f517c8ef12fb63e7bf921a506b07d05954b8ce"} Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048816 4985 scope.go:117] "RemoveContainer" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.048949 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.088241 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.152263 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.155188 4985 scope.go:117] "RemoveContainer" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.185155 4985 scope.go:117] "RemoveContainer" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.186712 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1\": container with ID starting with c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1 not found: ID does not exist" containerID="c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.186756 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1"} err="failed to get container status \"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1\": rpc error: code = NotFound desc = could not find container \"c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1\": container with ID starting with c702a10cdab084cf90ed3127aadbddcb2b5567942e99df9dc13cf2ed72911bb1 not found: ID does not exist" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.186785 4985 scope.go:117] "RemoveContainer" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.187212 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d\": container with ID starting with a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d not found: ID does not exist" containerID="a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.187327 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d"} err="failed to get container status \"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d\": rpc error: code = NotFound desc = could not find container \"a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d\": container with ID starting with a9184fcf170050de6feec987ab552a4583460aa30e11f3d13baaf83760b32b4d not found: ID does not exist" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.188758 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.189303 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189319 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" Jan 28 18:38:27 crc kubenswrapper[4985]: E0128 18:38:27.189341 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189347 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189608 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="cinder-scheduler" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.189623 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" containerName="probe" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.190762 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.192987 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.217131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.276913 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a93c21ad-4841-48c4-95a2-c2876a2fffd1" path="/var/lib/kubelet/pods/a93c21ad-4841-48c4-95a2-c2876a2fffd1/volumes" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318557 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8kcg\" (UniqueName: \"kubernetes.io/projected/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-kube-api-access-l8kcg\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318673 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318702 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318738 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.318765 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.420869 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8kcg\" (UniqueName: \"kubernetes.io/projected/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-kube-api-access-l8kcg\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.420927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.420990 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421070 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.421624 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.426992 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.427074 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-scripts\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.429521 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-config-data\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.439012 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.444165 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-l8kcg\" (UniqueName: \"kubernetes.io/projected/07cf4e1d-9eb6-491a-90a5-dc30af589bc0-kube-api-access-l8kcg\") pod \"cinder-scheduler-0\" (UID: \"07cf4e1d-9eb6-491a-90a5-dc30af589bc0\") " pod="openstack/cinder-scheduler-0" Jan 28 18:38:27 crc kubenswrapper[4985]: I0128 18:38:27.525115 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 18:38:28 crc kubenswrapper[4985]: I0128 18:38:28.107352 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 18:38:28 crc kubenswrapper[4985]: W0128 18:38:28.114165 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07cf4e1d_9eb6_491a_90a5_dc30af589bc0.slice/crio-4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2 WatchSource:0}: Error finding container 4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2: Status 404 returned error can't find the container with id 4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2 Jan 28 18:38:29 crc kubenswrapper[4985]: I0128 18:38:29.103769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e"} Jan 28 18:38:29 crc kubenswrapper[4985]: I0128 18:38:29.104227 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"4649fc8942b17521b7f8f69ff332256f4373b6fcb10a413bb103dc707e5ca7c2"} Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.119318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"534bfab617653e6a11bf66f4138bb11afac7d0216715a337a1291811d3bf5993"} Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.140035 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.140016274 podStartE2EDuration="3.140016274s" podCreationTimestamp="2026-01-28 18:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:30.136933537 +0000 UTC m=+1520.963496368" watchObservedRunningTime="2026-01-28 18:38:30.140016274 +0000 UTC m=+1520.966579115" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.278177 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-5bdcb887dc-rxkm6"] Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.280699 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.283482 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.284286 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.285179 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.301282 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5bdcb887dc-rxkm6"] Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-public-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-config-data\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400937 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-run-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-etc-swift\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.400982 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-combined-ca-bundle\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.401049 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-log-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.401080 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c2ms\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-kube-api-access-2c2ms\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " 
pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.401127 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-internal-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.502973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-public-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503042 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-config-data\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503099 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-run-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503119 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-etc-swift\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503133 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-combined-ca-bundle\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503176 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-log-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503211 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c2ms\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-kube-api-access-2c2ms\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503243 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-internal-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " 
pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503830 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-log-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.503888 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/12d4e4cf-9153-4a32-9155-f9d13a248a26-run-httpd\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.511909 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-internal-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.518301 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-combined-ca-bundle\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.520091 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-config-data\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.523790 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/12d4e4cf-9153-4a32-9155-f9d13a248a26-public-tls-certs\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.533671 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-etc-swift\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.533862 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c2ms\" (UniqueName: \"kubernetes.io/projected/12d4e4cf-9153-4a32-9155-f9d13a248a26-kube-api-access-2c2ms\") pod \"swift-proxy-5bdcb887dc-rxkm6\" (UID: \"12d4e4cf-9153-4a32-9155-f9d13a248a26\") " pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:30 crc kubenswrapper[4985]: I0128 18:38:30.603810 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:31 crc kubenswrapper[4985]: I0128 18:38:31.518949 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-5bdcb887dc-rxkm6"] Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.088177 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.094628 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:32 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:32 crc kubenswrapper[4985]: > Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.188235 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" event={"ID":"12d4e4cf-9153-4a32-9155-f9d13a248a26","Type":"ContainerStarted","Data":"c76d58f590fb1f84e984d71f4424979c392b574109a172ab18e201a96d57db73"} Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.188312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" event={"ID":"12d4e4cf-9153-4a32-9155-f9d13a248a26","Type":"ContainerStarted","Data":"c9ef0b82442a9b3cac449cb5f4cc6374930a4ca3be1767ba0c3ecb60f09c6f17"} Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.214404 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.214835 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" containerID="cri-o://9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.215844 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" containerID="cri-o://eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.215931 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" containerID="cri-o://63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.215981 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" containerID="cri-o://a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed" gracePeriod=30 Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.336451 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": read tcp 10.217.0.2:46134->10.217.0.202:3000: read: connection reset by peer" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.525838 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 18:38:32 crc 
kubenswrapper[4985]: I0128 18:38:32.979016 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.986291 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.993414 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-engine-config-data" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.993647 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-heat-dockercfg-9xd8p" Jan 28 18:38:32 crc kubenswrapper[4985]: I0128 18:38:32.993778 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-config-data" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.027444 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.087778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088126 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088308 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088642 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.088364 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.091234 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.117683 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190497 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190544 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190575 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190616 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190631 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190688 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190743 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190768 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.190830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.209525 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.235234 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.246005 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.272625 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"heat-engine-5b5c69f9bd-9jvz9\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298419 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298629 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298656 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.298734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.303086 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.303627 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.304097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.324883 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.326726 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.358194 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.379815 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" event={"ID":"12d4e4cf-9153-4a32-9155-f9d13a248a26","Type":"ContainerStarted","Data":"d5b1e2d40a41ff7b5f57c600340246acd209e59dba0454a65e70ad1ef8c68529"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.379857 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.379868 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.408349 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"dnsmasq-dns-688b9f5b49-v8wbr\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") " pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.427832 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.432827 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.434480 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.500180 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-cfnapi-config-data" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502005 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2" exitCode=0 Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502034 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275" exitCode=2 Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502041 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c" exitCode=0 Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502102 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.502112 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c"} Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529475 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529582 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529794 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.529878 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 
18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.586571 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641595 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.641658 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.670291 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.673989 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.675233 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.706303 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.707991 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.723438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"heat-cfnapi-84b7b4c956-xs5qg\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.724193 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"heat-api-config-data" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.744656 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" podStartSLOduration=3.7446259299999998 podStartE2EDuration="3.74462593s" podCreationTimestamp="2026-01-28 18:38:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:33.42972859 +0000 UTC m=+1524.256291431" watchObservedRunningTime="2026-01-28 18:38:33.74462593 +0000 UTC m=+1524.571188761" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.775828 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.775872 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.775974 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.776062 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.827203 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.878052 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879006 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879299 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879336 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.879569 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.906155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.911915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.914435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:33 crc kubenswrapper[4985]: I0128 18:38:33.915378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"heat-api-5965d558dc-cg7wv\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.030139 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.446588 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"] Jan 28 18:38:34 crc kubenswrapper[4985]: W0128 18:38:34.480143 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0db5c7c8_1c53_42d0_8e23_f1cba882d552.slice/crio-2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8 WatchSource:0}: Error finding container 2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8: Status 404 returned error can't find the container with id 2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8 Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.491782 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.541863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerStarted","Data":"2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8"} Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.543414 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerStarted","Data":"124e40d06c3bc6dec66768ab9299f6ec41b3437c9591832dd7f81dc8a3da2106"} Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.812447 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.202:3000/\": dial tcp 10.217.0.202:3000: connect: connection refused" Jan 28 18:38:34 crc kubenswrapper[4985]: I0128 18:38:34.828723 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.040407 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.571757 4985 generic.go:334] "Generic (PLEG): container finished" podID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerID="c2123433fc9db86b4e9f9ac84736c01949000210bd3cce880a9a4ecb7af8212e" exitCode=0 Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.571970 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerDied","Data":"c2123433fc9db86b4e9f9ac84736c01949000210bd3cce880a9a4ecb7af8212e"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.589276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerStarted","Data":"7f8aaec146afdcb274b6be4540ed468073cb056ab2a74bd69ec462b02099487a"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.598342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerStarted","Data":"18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.598539 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.600726 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerStarted","Data":"af15e77d0cac085450dbdbf09aea29f94aab86926bae124219c8abb6e3a9c5c2"} Jan 28 18:38:35 crc kubenswrapper[4985]: I0128 18:38:35.646080 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" podStartSLOduration=3.646056133 podStartE2EDuration="3.646056133s" podCreationTimestamp="2026-01-28 18:38:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:35.636782731 +0000 UTC m=+1526.463345562" watchObservedRunningTime="2026-01-28 18:38:35.646056133 +0000 UTC m=+1526.472618954" Jan 28 18:38:36 crc kubenswrapper[4985]: I0128 18:38:36.623716 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerStarted","Data":"1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71"} Jan 28 18:38:36 crc kubenswrapper[4985]: I0128 18:38:36.624100 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:36 crc kubenswrapper[4985]: I0128 18:38:36.649124 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" podStartSLOduration=3.64910215 podStartE2EDuration="3.64910215s" podCreationTimestamp="2026-01-28 18:38:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:36.643540153 +0000 UTC m=+1527.470102994" watchObservedRunningTime="2026-01-28 18:38:36.64910215 +0000 UTC m=+1527.475664981" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.032547 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="841350c5-b9e8-4331-9282-e129f8152153" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.641434 4985 generic.go:334] "Generic (PLEG): container finished" podID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerID="a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed" exitCode=0 Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.641513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed"} Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.817297 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.856636 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964416 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964506 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964534 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964643 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964684 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964711 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") pod \"15ab3d09-80d2-4a3b-84d8-09119b2be701\" (UID: \"15ab3d09-80d2-4a3b-84d8-09119b2be701\") " Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964750 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.964901 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.965469 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.965489 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/15ab3d09-80d2-4a3b-84d8-09119b2be701-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.977846 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts" (OuterVolumeSpecName: "scripts") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:37 crc kubenswrapper[4985]: I0128 18:38:37.977859 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp" (OuterVolumeSpecName: "kube-api-access-94qqp") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "kube-api-access-94qqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.015444 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.068154 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.068194 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.068205 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-94qqp\" (UniqueName: \"kubernetes.io/projected/15ab3d09-80d2-4a3b-84d8-09119b2be701-kube-api-access-94qqp\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.103407 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data" (OuterVolumeSpecName: "config-data") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.129958 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "15ab3d09-80d2-4a3b-84d8-09119b2be701" (UID: "15ab3d09-80d2-4a3b-84d8-09119b2be701"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.175898 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.176136 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/15ab3d09-80d2-4a3b-84d8-09119b2be701-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.662795 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"15ab3d09-80d2-4a3b-84d8-09119b2be701","Type":"ContainerDied","Data":"24f37b343823af87929d4be979bf978ca07c8b7fe426ee346d1a058ab94e67be"} Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.662883 4985 scope.go:117] "RemoveContainer" containerID="eb06a76353fe34ee6deffdc7776d0fbb5a1fc84d65807faeb9d2ecdc406f4df2" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.662949 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.733603 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.758886 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790019 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790625 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790648 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790654 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790675 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790681 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" Jan 28 18:38:38 crc kubenswrapper[4985]: E0128 18:38:38.790694 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790700 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790913 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="sg-core" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790929 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-central-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790948 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="proxy-httpd" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.790963 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" containerName="ceilometer-notification-agent" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.796740 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.801618 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.801744 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.824128 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923112 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923167 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923191 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.923233 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.924146 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.924327 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:38 crc kubenswrapper[4985]: I0128 18:38:38.924379 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027474 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027578 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027652 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027699 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.027808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.028686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.028724 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.035358 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.035423 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.039378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.040281 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.134311 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"ceilometer-0\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " pod="openstack/ceilometer-0" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.177794 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.288667 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15ab3d09-80d2-4a3b-84d8-09119b2be701" path="/var/lib/kubelet/pods/15ab3d09-80d2-4a3b-84d8-09119b2be701/volumes" Jan 28 18:38:39 crc kubenswrapper[4985]: I0128 18:38:39.421654 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.158052 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.160033 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.182909 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.207309 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.208935 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.232019 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.265328 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.266823 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.290309 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.356907 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.356984 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357003 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357053 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357071 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357145 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357180 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.357237 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: 
I0128 18:38:40.460005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460092 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460192 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460288 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460314 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460344 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460363 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460425 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460460 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc 
kubenswrapper[4985]: I0128 18:38:40.460495 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.460577 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.468310 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.474873 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.478018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.485076 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.485921 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.489449 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.489668 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod \"heat-engine-54bf646c6-b6zb2\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.491091 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"heat-cfnapi-788f4c49c5-d7wbz\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.532589 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.565165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.569066 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.572132 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.575104 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 
28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.593118 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"heat-api-5c6549b6bc-9j9qm\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.615679 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.622072 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.783792 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:40 crc kubenswrapper[4985]: I0128 18:38:40.889888 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:41 crc kubenswrapper[4985]: I0128 18:38:41.185720 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:38:41 crc kubenswrapper[4985]: I0128 18:38:41.185775 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.111642 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:42 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:42 crc kubenswrapper[4985]: > Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.928190 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.941444 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.981323 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:38:42 crc kubenswrapper[4985]: I0128 18:38:42.985079 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.007237 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.009725 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.010115 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-public-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.010310 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-api-internal-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.013294 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-public-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.013544 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-heat-cfnapi-internal-svc" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.026197 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.041629 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127693 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127868 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.127914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128214 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128452 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: 
I0128 18:38:43.128553 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128910 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.128996 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.129079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.231935 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232064 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " 
pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232135 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232195 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232264 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232339 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232406 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.232429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 
18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.253779 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.254034 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.254176 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.254411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.255048 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.255570 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.257419 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.257951 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.258360 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.259556 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.266618 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"heat-api-78f74b8b49-ngj6j\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.274097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"heat-cfnapi-db4c676cd-xbwzr\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.383973 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.384560 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.430589 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.603858 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:43 crc kubenswrapper[4985]: I0128 18:38:43.604156 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns" containerID="cri-o://911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106" gracePeriod=10 Jan 28 18:38:44 crc kubenswrapper[4985]: I0128 18:38:44.265749 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:44 crc kubenswrapper[4985]: I0128 18:38:44.348118 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.206:5353: connect: connection refused" Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.773138 4985 generic.go:334] "Generic (PLEG): container finished" podID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerID="911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106" exitCode=0 Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.773188 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerDied","Data":"911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106"} Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.778611 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-f49f9645f-bs9wr" Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.841703 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"] Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.841938 4985 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/neutron-d8b8b566d-89qjp" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api" containerID="cri-o://a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7" gracePeriod=30 Jan 28 18:38:45 crc kubenswrapper[4985]: I0128 18:38:45.842439 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d8b8b566d-89qjp" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd" containerID="cri-o://f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb" gracePeriod=30 Jan 28 18:38:46 crc kubenswrapper[4985]: I0128 18:38:46.788719 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerID="f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb" exitCode=0 Jan 28 18:38:46 crc kubenswrapper[4985]: I0128 18:38:46.789052 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerDied","Data":"f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb"} Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.287935 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified" Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.288929 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstackclient,Image:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,Command:[/bin/sleep],Args:[infinity],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfh5b6h56dh588h4hd5h549h566hbdh68fh56h5dbh5f8h5ch5dch5f8h55dh679h67dh79h678hbh5cch5b8h544h577h576hcfhb8h696h5bbh54q,ValueFrom:nil,},EnvVar{Name:OS_CLOUD,Value:default,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_CA_CERT,Value:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_HOST,Value:metric-storage-prometheus.openstack.svc,ValueFrom:nil,},EnvVar{Name:PROMETHEUS_PORT,Value:9090,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:openstack-config,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/.config/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/home/cloud-admin/cloudrc,SubPath:cloudrc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-57stt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42401,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42401,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstackclient_openstack(1d8f391e-0ed3-4969-b61b-5b9d602644fa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.290021 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstackclient" podUID="1d8f391e-0ed3-4969-b61b-5b9d602644fa" Jan 28 18:38:47 crc kubenswrapper[4985]: I0128 18:38:47.362675 4985 scope.go:117] "RemoveContainer" containerID="63b255400568dba8dbf5bfd10074c794164e917c67207e6067421496c44dc275" Jan 28 18:38:47 crc kubenswrapper[4985]: I0128 18:38:47.773438 4985 scope.go:117] "RemoveContainer" containerID="a44911563543df4ca2f6c7e7c98eed8a29c0db3a0dc60c6c03eff54813b88aed" Jan 28 18:38:47 crc kubenswrapper[4985]: I0128 18:38:47.890882 4985 scope.go:117] "RemoveContainer" containerID="9601c8e2c8b6e4ccc92d4c33c1be8c9239fcb6b941700f4c60e2af655b805d3c" Jan 28 18:38:47 crc kubenswrapper[4985]: E0128 18:38:47.890973 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openstackclient\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified\\\"\"" pod="openstack/openstackclient" podUID="1d8f391e-0ed3-4969-b61b-5b9d602644fa" Jan 28 18:38:48 crc kubenswrapper[4985]: E0128 18:38:48.138240 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod89fc2c75_41eb_441e_a171_5c716b823277.slice/crio-conmon-06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00.scope\": RecentStats: unable to find data in memory cache]" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.278728 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364076 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364246 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364418 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364484 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364521 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.364668 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") pod \"c3a8f8a9-e888-4754-94da-0ef0e972c995\" (UID: \"c3a8f8a9-e888-4754-94da-0ef0e972c995\") " Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.429692 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q" (OuterVolumeSpecName: "kube-api-access-nqm7q") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "kube-api-access-nqm7q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.468261 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqm7q\" (UniqueName: \"kubernetes.io/projected/c3a8f8a9-e888-4754-94da-0ef0e972c995-kube-api-access-nqm7q\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.755729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.778437 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.781437 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config" (OuterVolumeSpecName: "config") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.782126 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.791305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.798170 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c3a8f8a9-e888-4754-94da-0ef0e972c995" (UID: "c3a8f8a9-e888-4754-94da-0ef0e972c995"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.851821 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880320 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880347 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880356 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.880364 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3a8f8a9-e888-4754-94da-0ef0e972c995-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.881180 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.881171 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6578955fd5-j67tm" event={"ID":"c3a8f8a9-e888-4754-94da-0ef0e972c995","Type":"ContainerDied","Data":"2a25bfd428dd4118e93b5a07dd33258e59fc68c31465c5aecff463045a099bfc"} Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.881236 4985 scope.go:117] "RemoveContainer" containerID="911e0b914f7e2d1c2f9a2d3c862476c93ef10ae9407c5181272ef05180c08106" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.921179 4985 scope.go:117] "RemoveContainer" containerID="c3d6846527cefd541216dec8dce99f14831f1db9f838810b3978ccef4ebab806" Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.935408 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.954131 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6578955fd5-j67tm"] Jan 28 18:38:48 crc kubenswrapper[4985]: I0128 18:38:48.985319 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:38:49 crc kubenswrapper[4985]: W0128 18:38:49.006432 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod261340dd_15fd_43d9_8db3_3de095d8728a.slice/crio-21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a WatchSource:0}: Error finding container 21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a: Status 404 returned error can't find the container with id 21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.020302 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:38:49 crc kubenswrapper[4985]: W0128 18:38:49.031574 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf0c2a92a_343c_42fa_a740_8bb10701d271.slice/crio-949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd WatchSource:0}: Error finding container 949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd: Status 404 returned error can't find the container with id 949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.071399 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.092299 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.106259 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.283450 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" path="/var/lib/kubelet/pods/c3a8f8a9-e888-4754-94da-0ef0e972c995/volumes" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.905068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerStarted","Data":"b124dd8e680ed4c6b21bcff9be1e93e485ca3c7ce4f5a633c143c727e10e2e74"} Jan 28 18:38:49 crc 
kubenswrapper[4985]: I0128 18:38:49.907221 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerStarted","Data":"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.907277 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-5965d558dc-cg7wv" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" containerID="cri-o://06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00" gracePeriod=60 Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.907296 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.909940 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerStarted","Data":"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.910049 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" containerID="cri-o://0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a" gracePeriod=60 Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.910132 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.917234 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerStarted","Data":"949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.920350 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerStarted","Data":"21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.924549 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"831d830f0ce8de8c61fae9ceebb6944114447b863f9b44abf86e65cce9b70907"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.933655 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5965d558dc-cg7wv" podStartSLOduration=4.64858077 podStartE2EDuration="16.933630024s" podCreationTimestamp="2026-01-28 18:38:33 +0000 UTC" firstStartedPulling="2026-01-28 18:38:35.09941845 +0000 UTC m=+1525.925981271" lastFinishedPulling="2026-01-28 18:38:47.384467704 +0000 UTC m=+1538.211030525" observedRunningTime="2026-01-28 18:38:49.928054116 +0000 UTC m=+1540.754616947" watchObservedRunningTime="2026-01-28 18:38:49.933630024 +0000 UTC m=+1540.760192845" Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.937654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerStarted","Data":"c2cd5ecab7f62d49a442677c7f74b95e91134604fb9c330ec7bb5b250544e223"} Jan 28 18:38:49 crc 
kubenswrapper[4985]: I0128 18:38:49.941125 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerStarted","Data":"81214ec8d253d3da7a8b05fb6b49e40b2d03873d9fbc8130d3d5a18dff66c068"} Jan 28 18:38:49 crc kubenswrapper[4985]: I0128 18:38:49.986141 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" podStartSLOduration=4.370591951 podStartE2EDuration="16.986118116s" podCreationTimestamp="2026-01-28 18:38:33 +0000 UTC" firstStartedPulling="2026-01-28 18:38:34.870458636 +0000 UTC m=+1525.697021457" lastFinishedPulling="2026-01-28 18:38:47.485984801 +0000 UTC m=+1538.312547622" observedRunningTime="2026-01-28 18:38:49.948771831 +0000 UTC m=+1540.775334652" watchObservedRunningTime="2026-01-28 18:38:49.986118116 +0000 UTC m=+1540.812680937" Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.983750 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerStarted","Data":"df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.984112 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.986053 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerStarted","Data":"6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.988092 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerStarted","Data":"33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.989962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerStarted","Data":"ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.990461 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.992131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerStarted","Data":"c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321"} Jan 28 18:38:50 crc kubenswrapper[4985]: I0128 18:38:50.992312 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:38:51 crc kubenswrapper[4985]: I0128 18:38:51.035490 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-54bf646c6-b6zb2" podStartSLOduration=11.035471161 podStartE2EDuration="11.035471161s" podCreationTimestamp="2026-01-28 18:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:51.022088723 +0000 UTC m=+1541.848651564" watchObservedRunningTime="2026-01-28 
18:38:51.035471161 +0000 UTC m=+1541.862033982" Jan 28 18:38:51 crc kubenswrapper[4985]: I0128 18:38:51.046615 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-78f74b8b49-ngj6j" podStartSLOduration=9.046595895 podStartE2EDuration="9.046595895s" podCreationTimestamp="2026-01-28 18:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:51.003994312 +0000 UTC m=+1541.830557133" watchObservedRunningTime="2026-01-28 18:38:51.046595895 +0000 UTC m=+1541.873158716" Jan 28 18:38:51 crc kubenswrapper[4985]: I0128 18:38:51.070077 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podStartSLOduration=9.070057367 podStartE2EDuration="9.070057367s" podCreationTimestamp="2026-01-28 18:38:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:51.038746493 +0000 UTC m=+1541.865309334" watchObservedRunningTime="2026-01-28 18:38:51.070057367 +0000 UTC m=+1541.896620188" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.017190 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.018159 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.101423 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:38:52 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:38:52 crc kubenswrapper[4985]: > Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.109224 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-5c6549b6bc-9j9qm" podStartSLOduration=12.109199605 podStartE2EDuration="12.109199605s" podCreationTimestamp="2026-01-28 18:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:52.09060901 +0000 UTC m=+1542.917171831" watchObservedRunningTime="2026-01-28 18:38:52.109199605 +0000 UTC m=+1542.935762426" Jan 28 18:38:52 crc kubenswrapper[4985]: I0128 18:38:52.129242 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podStartSLOduration=12.12922272 podStartE2EDuration="12.12922272s" podCreationTimestamp="2026-01-28 18:38:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:38:52.126776171 +0000 UTC m=+1542.953339002" watchObservedRunningTime="2026-01-28 18:38:52.12922272 +0000 UTC m=+1542.955785541" Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.032946 4985 generic.go:334] "Generic (PLEG): container finished" podID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerID="6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c" exitCode=1 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.033015 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" 
event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerDied","Data":"6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c"} Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.033849 4985 scope.go:117] "RemoveContainer" containerID="6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c" Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.037441 4985 generic.go:334] "Generic (PLEG): container finished" podID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerID="33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1" exitCode=1 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.037500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerDied","Data":"33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1"} Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.037871 4985 scope.go:117] "RemoveContainer" containerID="33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1" Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.287166 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.287752 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log" containerID="cri-o://c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f" gracePeriod=30 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.288031 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd" containerID="cri-o://c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016" gracePeriod=30 Jan 28 18:38:53 crc kubenswrapper[4985]: I0128 18:38:53.452027 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.052030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerStarted","Data":"7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2"} Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.052176 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.054190 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerStarted","Data":"abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915"} Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.054420 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.057131 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c2c9b96-2033-4221-8667-e24507c76269" containerID="c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f" exitCode=143 Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.057163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerDied","Data":"c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f"} Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.876668 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.877762 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log" containerID="cri-o://824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c" gracePeriod=30 Jan 28 18:38:54 crc kubenswrapper[4985]: I0128 18:38:54.877879 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd" containerID="cri-o://1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951" gracePeriod=30 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.070010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.071973 4985 generic.go:334] "Generic (PLEG): container finished" podID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" exitCode=1 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.072019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerDied","Data":"abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.072098 4985 scope.go:117] "RemoveContainer" containerID="6e0dbbd9195d83f0174fb3b0f99757882af3ab72ec8d5a94b8cd365a8be3cc2c" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.072821 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:38:55 crc kubenswrapper[4985]: E0128 18:38:55.073076 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f4c49c5-d7wbz_openstack(c96952df-fe61-4b70-a166-ebf0dc93bb94)\"" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.077280 4985 generic.go:334] "Generic (PLEG): container finished" podID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" exitCode=1 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.077405 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerDied","Data":"7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.077766 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:38:55 crc kubenswrapper[4985]: E0128 18:38:55.078016 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c6549b6bc-9j9qm_openstack(c2d3f9ad-30d3-4e69-9229-f84c7b43b341)\"" pod="openstack/heat-api-5c6549b6bc-9j9qm" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.085805 4985 generic.go:334] "Generic (PLEG): container finished" podID="183853eb-591f-4859-9824-550b76c6f115" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c" exitCode=143 Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.085856 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerDied","Data":"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"} Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.160624 4985 scope.go:117] "RemoveContainer" containerID="33ba8acc7f6f2b8493215672a3f6990f3e5a51dcbbcec487f89cacb4a7d893e1" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.533687 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:38:55 crc kubenswrapper[4985]: I0128 18:38:55.891905 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.101719 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:38:56 crc kubenswrapper[4985]: E0128 18:38:56.102371 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f4c49c5-d7wbz_openstack(c96952df-fe61-4b70-a166-ebf0dc93bb94)\"" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.106860 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:38:56 crc kubenswrapper[4985]: E0128 18:38:56.107081 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c6549b6bc-9j9qm_openstack(c2d3f9ad-30d3-4e69-9229-f84c7b43b341)\"" pod="openstack/heat-api-5c6549b6bc-9j9qm" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.114684 4985 generic.go:334] "Generic (PLEG): container finished" podID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerID="a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7" exitCode=0 Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.114761 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerDied","Data":"a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7"} Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.122893 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc"} Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.132801 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.558943 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": read tcp 10.217.0.2:50556->10.217.0.192:9292: read: connection reset by peer" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.559845 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.192:9292/healthcheck\": read tcp 10.217.0.2:50566->10.217.0.192:9292: read: connection reset by peer" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.746401 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.811325 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.811809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.812389 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.812415 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.812501 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") pod \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\" (UID: \"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25\") " Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.816481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m" (OuterVolumeSpecName: "kube-api-access-x2q6m") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "kube-api-access-x2q6m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.864261 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.918204 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2q6m\" (UniqueName: \"kubernetes.io/projected/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-kube-api-access-x2q6m\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:56 crc kubenswrapper[4985]: I0128 18:38:56.918238 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-httpd-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.024353 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config" (OuterVolumeSpecName: "config") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.049232 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.122925 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.122960 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.148649 4985 generic.go:334] "Generic (PLEG): container finished" podID="8c2c9b96-2033-4221-8667-e24507c76269" containerID="c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016" exitCode=0 Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.148708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerDied","Data":"c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016"} Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.167520 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d8b8b566d-89qjp" event={"ID":"8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25","Type":"ContainerDied","Data":"c9f68ac609dd2f41623830c63a61e02d6c06dc430a7f02a9f5349b8bf758436d"} Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.167580 4985 scope.go:117] "RemoveContainer" containerID="f57d4bc985319a4e7bd60f9422a7035d136988dd0fb6ceddd52937e21d4ac9bb" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.167727 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d8b8b566d-89qjp" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.175098 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" (UID: "8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25"). InnerVolumeSpecName "ovndb-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.196353 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b"} Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.197027 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.197187 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:38:57 crc kubenswrapper[4985]: E0128 18:38:57.197425 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-api\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-api pod=heat-api-5c6549b6bc-9j9qm_openstack(c2d3f9ad-30d3-4e69-9229-f84c7b43b341)\"" pod="openstack/heat-api-5c6549b6bc-9j9qm" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" Jan 28 18:38:57 crc kubenswrapper[4985]: E0128 18:38:57.197432 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-cfnapi\" with CrashLoopBackOff: \"back-off 10s restarting failed container=heat-cfnapi pod=heat-cfnapi-788f4c49c5-d7wbz_openstack(c96952df-fe61-4b70-a166-ebf0dc93bb94)\"" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.216661 4985 scope.go:117] "RemoveContainer" containerID="a733625bfb47d7059258bc779c698483b4c78dfaa9ccfa77793a3686b76016a7" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.225504 4985 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.267627 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331553 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331631 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331713 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331929 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.331980 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.332076 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") pod \"8c2c9b96-2033-4221-8667-e24507c76269\" (UID: \"8c2c9b96-2033-4221-8667-e24507c76269\") " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.333969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.334538 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs" (OuterVolumeSpecName: "logs") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.342049 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts" (OuterVolumeSpecName: "scripts") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.342319 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7" (OuterVolumeSpecName: "kube-api-access-nh6l7") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "kube-api-access-nh6l7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.371729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.375929 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (OuterVolumeSpecName: "glance") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "pvc-a28b8b70-fd49-47a9-9731-34913060b77f". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.405230 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data" (OuterVolumeSpecName: "config-data") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.421396 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "8c2c9b96-2033-4221-8667-e24507c76269" (UID: "8c2c9b96-2033-4221-8667-e24507c76269"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435221 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435287 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" " Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435301 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435311 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435320 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8c2c9b96-2033-4221-8667-e24507c76269-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435328 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nh6l7\" (UniqueName: \"kubernetes.io/projected/8c2c9b96-2033-4221-8667-e24507c76269-kube-api-access-nh6l7\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435338 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.435346 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8c2c9b96-2033-4221-8667-e24507c76269-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.462087 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.462240 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-a28b8b70-fd49-47a9-9731-34913060b77f" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f") on node "crc"
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.537102 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.609131 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"]
Jan 28 18:38:57 crc kubenswrapper[4985]: I0128 18:38:57.620157 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d8b8b566d-89qjp"]
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.238338 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"8c2c9b96-2033-4221-8667-e24507c76269","Type":"ContainerDied","Data":"43d735c182cbb81ec5017199eb78a2029759022896fdabfe1470a42d01bd6b7b"}
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.238643 4985 scope.go:117] "RemoveContainer" containerID="c202d2036ca2a524c7fa057270b0486dc059f15b87694a6661d8c1bd8fb91016"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.238649 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.311146 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.325992 4985 scope.go:117] "RemoveContainer" containerID="c1278cfba933f75936a9894cfaa710f2d276954aafea6a97d46314226d60c19f"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.326367 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357045 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357513 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357527 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357546 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357553 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357566 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="init"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357573 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="init"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357587 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357593 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357612 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357618 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns"
Jan 28 18:38:58 crc kubenswrapper[4985]: E0128 18:38:58.357634 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357639 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357847 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-api"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357870 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357881 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3a8f8a9-e888-4754-94da-0ef0e972c995" containerName="dnsmasq-dns"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357893 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2c9b96-2033-4221-8667-e24507c76269" containerName="glance-log"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.357901 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" containerName="neutron-httpd"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.359054 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.362163 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.362995 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.381206 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.395526 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": read tcp 10.217.0.2:40330->10.217.0.193:9292: read: connection reset by peer"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.395737 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-internal-api-0" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.193:9292/healthcheck\": read tcp 10.217.0.2:40336->10.217.0.193:9292: read: connection reset by peer"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454312 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454433 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454683 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.454897 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.455098 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.455172 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.455321 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txbxd\" (UniqueName: \"kubernetes.io/projected/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-kube-api-access-txbxd\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.557928 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558080 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558271 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558308 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.558343 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txbxd\" (UniqueName: \"kubernetes.io/projected/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-kube-api-access-txbxd\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.559371 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-logs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.559660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.565885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.566885 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-scripts\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.568454 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
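The "STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice..." (18:38:57.462087) and "Skipping MountDevice..." (18:38:58.568454) lines both come from the kubelet's CSI attacher: it only performs the NodeStageVolume/NodeUnstageVolume device steps when the node plugin advertises the STAGE_UNSTAGE_VOLUME capability, and kubevirt.io.hostpath-provisioner evidently does not. A sketch of the plugin side under that assumption (the type name nodeServer is hypothetical; the csi package is the CSI spec's generated Go bindings):

```go
// Package csisketch: hedged sketch of how a CSI node plugin's capability
// answer produces the "Skipping MountDevice/UnmountDevice" kubelet lines.
package csisketch

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

type nodeServer struct {
	csi.UnimplementedNodeServer
}

// Returning an empty capability list (as a hostpath-style provisioner can)
// tells the kubelet that staging is unsupported, so it skips the
// MountDevice/UnmountDevice steps entirely.
func (ns *nodeServer) NodeGetCapabilities(ctx context.Context, req *csi.NodeGetCapabilitiesRequest) (*csi.NodeGetCapabilitiesResponse, error) {
	return &csi.NodeGetCapabilitiesResponse{}, nil
}

// A plugin that does implement staging would advertise it like this instead:
func stagingCapabilities() *csi.NodeGetCapabilitiesResponse {
	return &csi.NodeGetCapabilitiesResponse{
		Capabilities: []*csi.NodeServiceCapability{{
			Type: &csi.NodeServiceCapability_Rpc{
				Rpc: &csi.NodeServiceCapability_RPC{
					Type: csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME,
				},
			},
		}},
	}
}
```

With an empty capability list the kubelet treats the device stage as a no-op and proceeds straight to MountVolume.SetUp / UnmountVolume.TearDown, which is exactly the sequence the surrounding entries record.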
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.568495 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2d6568af50c46d048a9023d9ac84db4baa0cf8b023fb9ef6c59e622b024bcc77/globalmount\"" pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.575695 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-config-data\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.577178 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.589445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txbxd\" (UniqueName: \"kubernetes.io/projected/9ff4e22d-1c99-4c30-9eaa-3225c1e868c7-kube-api-access-txbxd\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.677635 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-a28b8b70-fd49-47a9-9731-34913060b77f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-a28b8b70-fd49-47a9-9731-34913060b77f\") pod \"glance-default-external-api-0\" (UID: \"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7\") " pod="openstack/glance-default-external-api-0"
Jan 28 18:38:58 crc kubenswrapper[4985]: I0128 18:38:58.979625 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.179551 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.371922 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2c9b96-2033-4221-8667-e24507c76269" path="/var/lib/kubelet/pods/8c2c9b96-2033-4221-8667-e24507c76269/volumes"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.376469 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25" path="/var/lib/kubelet/pods/8ccda71b-4bfb-46ef-9cf1-22a1df1d8c25/volumes"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392540 4985 generic.go:334] "Generic (PLEG): container finished" podID="183853eb-591f-4859-9824-550b76c6f115" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951" exitCode=0
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerDied","Data":"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"}
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392676 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"183853eb-591f-4859-9824-550b76c6f115","Type":"ContainerDied","Data":"3032950d6605333705d222c5cf7752eabb2ff3aa233f4490427968658cbe487f"}
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392700 4985 scope.go:117] "RemoveContainer" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.392704 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.456322 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.456378 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.456414 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457039 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457104 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457168 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457202 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.457306 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") pod \"183853eb-591f-4859-9824-550b76c6f115\" (UID: \"183853eb-591f-4859-9824-550b76c6f115\") "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.466037 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.466853 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs" (OuterVolumeSpecName: "logs") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.472356 4985 scope.go:117] "RemoveContainer" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500135 4985 scope.go:117] "RemoveContainer" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"
Jan 28 18:38:59 crc kubenswrapper[4985]: E0128 18:38:59.500633 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951\": container with ID starting with 1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951 not found: ID does not exist" containerID="1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500681 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951"} err="failed to get container status \"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951\": rpc error: code = NotFound desc = could not find container \"1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951\": container with ID starting with 1ec2b44fa5d3412417f9af2901041ce3f7df3ec4452ba3eb221562124c626951 not found: ID does not exist"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500705 4985 scope.go:117] "RemoveContainer" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"
Jan 28 18:38:59 crc kubenswrapper[4985]: E0128 18:38:59.500940 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c\": container with ID starting with 824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c not found: ID does not exist" containerID="824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.500962 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c"} err="failed to get container status \"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c\": rpc error: code = NotFound desc = could not find container \"824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c\": container with ID starting with 824baf003360a504fa8af1246aaa82fe073fe894a62643951d415e1b02a9a66c not found: ID does not exist"
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.515542 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts" (OuterVolumeSpecName: "scripts") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.524634 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx" (OuterVolumeSpecName: "kube-api-access-vsqtx") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "kube-api-access-vsqtx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564426 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564465 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-logs\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564476 4985 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/183853eb-591f-4859-9824-550b76c6f115-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.564486 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vsqtx\" (UniqueName: \"kubernetes.io/projected/183853eb-591f-4859-9824-550b76c6f115-kube-api-access-vsqtx\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.783593 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (OuterVolumeSpecName: "glance") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "pvc-515c3b80-2464-4146-928c-cf9de6a379dc". PluginName "kubernetes.io/csi", VolumeGidValue ""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.832771 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.887917 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" "
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.887952 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.922247 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data" (OuterVolumeSpecName: "config-data") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.928281 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 18:38:59 crc kubenswrapper[4985]: I0128 18:38:59.990868 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.005471 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "183853eb-591f-4859-9824-550b76c6f115" (UID: "183853eb-591f-4859-9824-550b76c6f115"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.032962 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.033418 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-515c3b80-2464-4146-928c-cf9de6a379dc" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc") on node "crc"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.095115 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/183853eb-591f-4859-9824-550b76c6f115-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.095162 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.341316 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.359551 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375100 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:39:00 crc kubenswrapper[4985]: E0128 18:39:00.375603 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375615 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log"
Jan 28 18:39:00 crc kubenswrapper[4985]: E0128 18:39:00.375634 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375640 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375892 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-httpd"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.375926 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="183853eb-591f-4859-9824-550b76c6f115" containerName="glance-log"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.377106 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.383938 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.384127 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.411178 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422240 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerStarted","Data":"2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080"}
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422423 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent" containerID="cri-o://6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422499 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd" containerID="cri-o://2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422523 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422533 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core" containerID="cri-o://a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.422543 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent" containerID="cri-o://ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc" gracePeriod=30
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.439573 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7","Type":"ContainerStarted","Data":"1624e18dccc8a03d5689dd5379b5128a85d73c1b1de90d097d616bfae8ab0542"}
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.484689 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=12.619858807 podStartE2EDuration="22.484661502s" podCreationTimestamp="2026-01-28 18:38:38 +0000 UTC" firstStartedPulling="2026-01-28 18:38:49.024509297 +0000 UTC m=+1539.851072118" lastFinishedPulling="2026-01-28 18:38:58.889311992 +0000 UTC m=+1549.715874813" observedRunningTime="2026-01-28 18:39:00.456986731 +0000 UTC m=+1551.283549552"
watchObservedRunningTime="2026-01-28 18:39:00.484661502 +0000 UTC m=+1551.311224323" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.504704 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505136 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505219 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdbsj\" (UniqueName: \"kubernetes.io/projected/d7b0993c-0b43-44d7-8498-6808f2a1439e-kube-api-access-cdbsj\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505268 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505414 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505596 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505644 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.505721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.607925 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608025 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdbsj\" (UniqueName: \"kubernetes.io/projected/d7b0993c-0b43-44d7-8498-6808f2a1439e-kube-api-access-cdbsj\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608165 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608198 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.608275 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.610356 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.610476 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.610968 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-logs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.611203 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/d7b0993c-0b43-44d7-8498-6808f2a1439e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.618164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.619039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.622686 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.632178 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7b0993c-0b43-44d7-8498-6808f2a1439e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.681114 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdbsj\" (UniqueName: \"kubernetes.io/projected/d7b0993c-0b43-44d7-8498-6808f2a1439e-kube-api-access-cdbsj\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.691772 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.692067 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/d04256428a5045d3b55ec61489edb632decdf9f4666f3e6952b725d307784bb2/globalmount\"" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.873479 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.907002 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-515c3b80-2464-4146-928c-cf9de6a379dc\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-515c3b80-2464-4146-928c-cf9de6a379dc\") pod \"glance-default-internal-api-0\" (UID: \"d7b0993c-0b43-44d7-8498-6808f2a1439e\") " pod="openstack/glance-default-internal-api-0" Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.988886 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.989320 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" containerID="cri-o://18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" gracePeriod=60 Jan 28 18:39:00 crc kubenswrapper[4985]: I0128 18:39:00.997887 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.346372 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="183853eb-591f-4859-9824-550b76c6f115" path="/var/lib/kubelet/pods/183853eb-591f-4859-9824-550b76c6f115/volumes" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.508486 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"1d8f391e-0ed3-4969-b61b-5b9d602644fa","Type":"ContainerStarted","Data":"1661f6106a354eeb8001c50cfed327742713be4cb739c514d329c311714e9193"} Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540488 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080" exitCode=0 Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540553 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b" exitCode=2 Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540590 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"2588192f60378ca1092182e85a2d142272639f43f1993cca86706ccb45ce9080"} Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.540630 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"a38360ca0387e0827a57f03126984e0a20e5b118f82925b6ad3b02f72f4d6f3b"} Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.545598 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.737584945 podStartE2EDuration="37.545575474s" podCreationTimestamp="2026-01-28 18:38:24 +0000 UTC" firstStartedPulling="2026-01-28 18:38:25.066631831 +0000 UTC m=+1515.893194652" lastFinishedPulling="2026-01-28 18:38:59.87462236 +0000 UTC m=+1550.701185181" observedRunningTime="2026-01-28 18:39:01.5298318 +0000 UTC m=+1552.356394621" watchObservedRunningTime="2026-01-28 18:39:01.545575474 +0000 UTC m=+1552.372138295" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.641933 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.751366 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.869507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:39:01 crc kubenswrapper[4985]: I0128 18:39:01.882300 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.142440 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:39:02 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:39:02 crc kubenswrapper[4985]: > Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.206477 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.284837 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.354918 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.442622 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.442742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.442895 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.443048 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") pod \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\" (UID: \"c2d3f9ad-30d3-4e69-9229-f84c7b43b341\") " Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.461753 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg" (OuterVolumeSpecName: "kube-api-access-p66bg") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "kube-api-access-p66bg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.461922 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.500684 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.542054 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data" (OuterVolumeSpecName: "config-data") pod "c2d3f9ad-30d3-4e69-9229-f84c7b43b341" (UID: "c2d3f9ad-30d3-4e69-9229-f84c7b43b341"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553001 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553346 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553358 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p66bg\" (UniqueName: \"kubernetes.io/projected/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-kube-api-access-p66bg\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.553371 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c2d3f9ad-30d3-4e69-9229-f84c7b43b341-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.582822 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5c6549b6bc-9j9qm" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.583453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5c6549b6bc-9j9qm" event={"ID":"c2d3f9ad-30d3-4e69-9229-f84c7b43b341","Type":"ContainerDied","Data":"b124dd8e680ed4c6b21bcff9be1e93e485ca3c7ce4f5a633c143c727e10e2e74"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.583525 4985 scope.go:117] "RemoveContainer" containerID="7759784baf4f1c964708f6c0104403ba9a3a6234690a3795821dabbc5d0d6ea2" Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.619384 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc" exitCode=0 Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.619483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"ebfc9ea99db013235f5adee2c18ba99af05a9f8dc3abaf0616d7d804e0cb54cc"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.638227 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7","Type":"ContainerStarted","Data":"b81cdd66bb8c798116c98e56da7c17cc64e9b25f2282b923ca2a69fdf3290ba0"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.660626 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d7b0993c-0b43-44d7-8498-6808f2a1439e","Type":"ContainerStarted","Data":"d33d17fd0dd647981ed09e99c772fb63ca0e1d2f6c1edf08c85f3bb830b8d000"} Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.801075 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:39:02 crc kubenswrapper[4985]: I0128 18:39:02.868618 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5c6549b6bc-9j9qm"] Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.063928 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.177709 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.177896 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.178060 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.178166 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") pod \"c96952df-fe61-4b70-a166-ebf0dc93bb94\" (UID: \"c96952df-fe61-4b70-a166-ebf0dc93bb94\") " Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.184852 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.189063 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq" (OuterVolumeSpecName: "kube-api-access-kscsq") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). InnerVolumeSpecName "kube-api-access-kscsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.255800 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.280073 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data" (OuterVolumeSpecName: "config-data") pod "c96952df-fe61-4b70-a166-ebf0dc93bb94" (UID: "c96952df-fe61-4b70-a166-ebf0dc93bb94"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282118 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282152 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282167 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c96952df-fe61-4b70-a166-ebf0dc93bb94-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.282182 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kscsq\" (UniqueName: \"kubernetes.io/projected/c96952df-fe61-4b70-a166-ebf0dc93bb94-kube-api-access-kscsq\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.287882 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" path="/var/lib/kubelet/pods/c2d3f9ad-30d3-4e69-9229-f84c7b43b341/volumes" Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.335753 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.343538 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.346572 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:39:03 crc kubenswrapper[4985]: E0128 18:39:03.346651 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.837502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"9ff4e22d-1c99-4c30-9eaa-3225c1e868c7","Type":"ContainerStarted","Data":"bcbb77df20289a96e57c3bdab8e83977f2e8aed07c87f906ad623466ac2e0388"} Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.876492 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d7b0993c-0b43-44d7-8498-6808f2a1439e","Type":"ContainerStarted","Data":"0d8e891cef15be2548a1fc103989cfe6a80da804e12c0a1f0bb4394f9d942622"} Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.906184 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=5.906157489 podStartE2EDuration="5.906157489s" podCreationTimestamp="2026-01-28 18:38:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:03.871443299 +0000 UTC m=+1554.698006130" watchObservedRunningTime="2026-01-28 18:39:03.906157489 +0000 UTC m=+1554.732720310" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.914046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" event={"ID":"c96952df-fe61-4b70-a166-ebf0dc93bb94","Type":"ContainerDied","Data":"81214ec8d253d3da7a8b05fb6b49e40b2d03873d9fbc8130d3d5a18dff66c068"} Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.914106 4985 scope.go:117] "RemoveContainer" containerID="abb96fd7fb05331537dd34a1c0b5788fa284926a44ec1c5c33fef6bf3a68b915" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.914301 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-788f4c49c5-d7wbz" Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.977317 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:39:03 crc kubenswrapper[4985]: I0128 18:39:03.993974 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-788f4c49c5-d7wbz"] Jan 28 18:39:04 crc kubenswrapper[4985]: I0128 18:39:04.926559 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d7b0993c-0b43-44d7-8498-6808f2a1439e","Type":"ContainerStarted","Data":"4668d03328d8733b473a0bc4e38e872cd4c65187b388112bf05d3b58cdf0c96b"} Jan 28 18:39:04 crc kubenswrapper[4985]: I0128 18:39:04.955998 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.955980438 podStartE2EDuration="4.955980438s" podCreationTimestamp="2026-01-28 18:39:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:04.950161214 +0000 UTC m=+1555.776724035" watchObservedRunningTime="2026-01-28 18:39:04.955980438 +0000 UTC m=+1555.782543259" Jan 28 18:39:05 crc kubenswrapper[4985]: I0128 18:39:05.280441 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" path="/var/lib/kubelet/pods/c96952df-fe61-4b70-a166-ebf0dc93bb94/volumes" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.258728 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.259574 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259592 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.259619 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259627 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.259651 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259659 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259927 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259942 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259965 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.259983 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c96952df-fe61-4b70-a166-ebf0dc93bb94" containerName="heat-cfnapi" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.260997 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.282037 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.356068 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:39:06 crc kubenswrapper[4985]: E0128 18:39:06.363974 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.364005 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d3f9ad-30d3-4e69-9229-f84c7b43b341" containerName="heat-api" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.366860 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.375324 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.375534 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.399723 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.399820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.400625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.400817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.462834 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.465205 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.502745 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.502904 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503005 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503051 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503108 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.503149 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.504060 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.505063 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.508367 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.532096 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"nova-api-f01b-account-create-update-b985r\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.533773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"nova-api-db-create-tq8xx\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.563086 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.565132 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.581224 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.583156 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.584105 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.600728 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608441 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.608944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.636519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"nova-cell0-db-create-jqvzw\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.640369 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.655977 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711207 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711303 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.711533 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.717167 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.718779 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.745674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"nova-cell1-db-create-mzbqq\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.769045 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.770508 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.772469 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.787020 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.788131 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814096 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814224 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814442 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.814478 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 
18:39:06.819702 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.834997 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"nova-cell0-b80b-account-create-update-mrvzq\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.861729 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.874944 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.917869 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.918380 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.918877 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:06 crc kubenswrapper[4985]: I0128 18:39:06.954201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"nova-cell1-7b9a-account-create-update-l47bt\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.204513 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.231663 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:39:07 crc kubenswrapper[4985]: W0128 18:39:07.498540 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc08dbb5_2423_4fe9_8c21_a668459cad74.slice/crio-2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6 WatchSource:0}: Error finding container 2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6: Status 404 returned error can't find the container with id 2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6 Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.522588 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.827332 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.902344 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:39:07 crc kubenswrapper[4985]: I0128 18:39:07.934303 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.017989 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.018770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerStarted","Data":"416cc2721f188704e4b7cf003f51e6d2dd0f4f7385c280dfd7b1160d868cf686"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.026572 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzbqq" event={"ID":"52f84c63-5719-4c32-bbc7-d7960fe35d35","Type":"ContainerStarted","Data":"2385680eb64658fe07f8aa3ec16ec314498bd3d6feafc53834fb4c2d568c94ea"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.033000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerStarted","Data":"c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.033046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerStarted","Data":"2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.043054 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerStarted","Data":"6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.043100 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" 
event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerStarted","Data":"dcf8630afc437b357fee41d6f6f5be42746432e209b3afa7319f44eff59c5a8e"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.051334 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerStarted","Data":"a68b313338833953d1d9cc02ae7888a7ecbd0081546779d13fd6e917a1c90e05"} Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.063764 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-f01b-account-create-update-b985r" podStartSLOduration=2.063744507 podStartE2EDuration="2.063744507s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:08.051722478 +0000 UTC m=+1558.878285299" watchObservedRunningTime="2026-01-28 18:39:08.063744507 +0000 UTC m=+1558.890307328" Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.083399 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-db-create-tq8xx" podStartSLOduration=2.083375891 podStartE2EDuration="2.083375891s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:08.073187784 +0000 UTC m=+1558.899750615" watchObservedRunningTime="2026-01-28 18:39:08.083375891 +0000 UTC m=+1558.909938712" Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.981050 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:39:08 crc kubenswrapper[4985]: I0128 18:39:08.981608 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.034999 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.039945 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.081123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerStarted","Data":"4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.100694 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerID="6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5" exitCode=0 Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.100769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"6264c75e309967c9f20db46eab077cb1a5ee5f417ccd8f79e08cda266fd4cda5"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.100803 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fe11ac1b-2633-40fd-b359-01d3309299a8","Type":"ContainerDied","Data":"831d830f0ce8de8c61fae9ceebb6944114447b863f9b44abf86e65cce9b70907"} Jan 28 18:39:09 crc kubenswrapper[4985]: 
I0128 18:39:09.100814 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="831d830f0ce8de8c61fae9ceebb6944114447b863f9b44abf86e65cce9b70907" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.113578 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerStarted","Data":"93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.118206 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-jqvzw" podStartSLOduration=3.118182096 podStartE2EDuration="3.118182096s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:09.109655815 +0000 UTC m=+1559.936218636" watchObservedRunningTime="2026-01-28 18:39:09.118182096 +0000 UTC m=+1559.944744937" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.129693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerStarted","Data":"c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.129756 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerStarted","Data":"9ccd623fbd6d8642ac8136c5acacb7e7c9cc2077b957698537cbb98c6ec3d29f"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.139538 4985 generic.go:334] "Generic (PLEG): container finished" podID="52f84c63-5719-4c32-bbc7-d7960fe35d35" containerID="d941727c28e1382267609d1ceda76e73a9f2d9cd3d596bc04e5cda672a1166cb" exitCode=0 Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.139628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzbqq" event={"ID":"52f84c63-5719-4c32-bbc7-d7960fe35d35","Type":"ContainerDied","Data":"d941727c28e1382267609d1ceda76e73a9f2d9cd3d596bc04e5cda672a1166cb"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.143118 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" podStartSLOduration=3.143095849 podStartE2EDuration="3.143095849s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:09.133122437 +0000 UTC m=+1559.959685288" watchObservedRunningTime="2026-01-28 18:39:09.143095849 +0000 UTC m=+1559.969658670" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.156517 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerID="6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241" exitCode=0 Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.156709 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerDied","Data":"6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241"} Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.157443 4985 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.157464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.178365 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" podStartSLOduration=3.178347664 podStartE2EDuration="3.178347664s" podCreationTimestamp="2026-01-28 18:39:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:09.153613626 +0000 UTC m=+1559.980176437" watchObservedRunningTime="2026-01-28 18:39:09.178347664 +0000 UTC m=+1560.004910485" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.320580 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323143 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323193 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323416 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323436 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323468 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323532 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.323594 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") pod \"fe11ac1b-2633-40fd-b359-01d3309299a8\" (UID: \"fe11ac1b-2633-40fd-b359-01d3309299a8\") " Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.335106 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.339541 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh" (OuterVolumeSpecName: "kube-api-access-qbqvh") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "kube-api-access-qbqvh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.339947 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.354373 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts" (OuterVolumeSpecName: "scripts") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.421451 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426732 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbqvh\" (UniqueName: \"kubernetes.io/projected/fe11ac1b-2633-40fd-b359-01d3309299a8-kube-api-access-qbqvh\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426774 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426787 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426799 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fe11ac1b-2633-40fd-b359-01d3309299a8-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.426809 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.494034 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.525343 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data" (OuterVolumeSpecName: "config-data") pod "fe11ac1b-2633-40fd-b359-01d3309299a8" (UID: "fe11ac1b-2633-40fd-b359-01d3309299a8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.529372 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:09 crc kubenswrapper[4985]: I0128 18:39:09.529404 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fe11ac1b-2633-40fd-b359-01d3309299a8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.192020 4985 generic.go:334] "Generic (PLEG): container finished" podID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.192118 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerDied","Data":"18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.202070 4985 generic.go:334] "Generic (PLEG): container finished" podID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerID="93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.202131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerDied","Data":"93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.206146 4985 generic.go:334] "Generic (PLEG): container finished" podID="75ac3925-bebe-4c63-999f-073386005723" containerID="c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.206220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerDied","Data":"c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.208735 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerID="c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.208789 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerDied","Data":"c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.214722 4985 generic.go:334] "Generic (PLEG): container finished" podID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerID="4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7" exitCode=0 Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.214878 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.214887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerDied","Data":"4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7"} Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.412096 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.431622 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.445928 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453629 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453687 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core" Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453717 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453723 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent" Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453734 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453739 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd" Jan 28 18:39:10 crc kubenswrapper[4985]: E0128 18:39:10.453756 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.453762 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454590 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="sg-core" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454614 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-central-agent" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454627 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="proxy-httpd" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.454641 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" containerName="ceilometer-notification-agent" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.456665 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.456779 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.460978 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.468781 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489236 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489445 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489470 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489568 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.489839 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.490193 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.490286 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594061 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594336 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594383 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594446 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594465 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.594534 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.605240 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.605580 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.610406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.615931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.639652 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.640820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.647190 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"ceilometer-0\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.785459 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.806319 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.812981 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901120 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901166 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901209 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") pod \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901273 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901341 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") pod \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\" (UID: \"dc09e699-e5ce-4e02-b3ae-ce43d120e70d\") " Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.901433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") pod \"0db5c7c8-1c53-42d0-8e23-f1cba882d552\" (UID: 
\"0db5c7c8-1c53-42d0-8e23-f1cba882d552\") " Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.903784 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc09e699-e5ce-4e02-b3ae-ce43d120e70d" (UID: "dc09e699-e5ce-4e02-b3ae-ce43d120e70d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.907668 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.907749 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll" (OuterVolumeSpecName: "kube-api-access-ghxll") pod "dc09e699-e5ce-4e02-b3ae-ce43d120e70d" (UID: "dc09e699-e5ce-4e02-b3ae-ce43d120e70d"). InnerVolumeSpecName "kube-api-access-ghxll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.910433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6" (OuterVolumeSpecName: "kube-api-access-tkrx6") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "kube-api-access-tkrx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.940517 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.958747 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.980080 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data" (OuterVolumeSpecName: "config-data") pod "0db5c7c8-1c53-42d0-8e23-f1cba882d552" (UID: "0db5c7c8-1c53-42d0-8e23-f1cba882d552"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.999561 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:10 crc kubenswrapper[4985]: I0128 18:39:10.999602 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.003327 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") pod \"52f84c63-5719-4c32-bbc7-d7960fe35d35\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.003576 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") pod \"52f84c63-5719-4c32-bbc7-d7960fe35d35\" (UID: \"52f84c63-5719-4c32-bbc7-d7960fe35d35\") " Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.003940 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "52f84c63-5719-4c32-bbc7-d7960fe35d35" (UID: "52f84c63-5719-4c32-bbc7-d7960fe35d35"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004483 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004507 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkrx6\" (UniqueName: \"kubernetes.io/projected/0db5c7c8-1c53-42d0-8e23-f1cba882d552-kube-api-access-tkrx6\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004659 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004675 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghxll\" (UniqueName: \"kubernetes.io/projected/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-kube-api-access-ghxll\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004685 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0db5c7c8-1c53-42d0-8e23-f1cba882d552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004697 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc09e699-e5ce-4e02-b3ae-ce43d120e70d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.004707 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/52f84c63-5719-4c32-bbc7-d7960fe35d35-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: 
I0128 18:39:11.007123 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh" (OuterVolumeSpecName: "kube-api-access-xrdwh") pod "52f84c63-5719-4c32-bbc7-d7960fe35d35" (UID: "52f84c63-5719-4c32-bbc7-d7960fe35d35"). InnerVolumeSpecName "kube-api-access-xrdwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.046544 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.049779 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.106003 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrdwh\" (UniqueName: \"kubernetes.io/projected/52f84c63-5719-4c32-bbc7-d7960fe35d35-kube-api-access-xrdwh\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.186413 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.186468 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.231525 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" event={"ID":"0db5c7c8-1c53-42d0-8e23-f1cba882d552","Type":"ContainerDied","Data":"2e057514ac41ec70a53f671ee0d42894f4f84f59f4823dfd07fa681695ec78b8"} Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.231898 4985 scope.go:117] "RemoveContainer" containerID="18166ef32a4ee4d9d0c0b80bd4417d68d024bef50c3952f850b0c2bf8c48670b" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.231563 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5b5c69f9bd-9jvz9" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.233527 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mzbqq" event={"ID":"52f84c63-5719-4c32-bbc7-d7960fe35d35","Type":"ContainerDied","Data":"2385680eb64658fe07f8aa3ec16ec314498bd3d6feafc53834fb4c2d568c94ea"} Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.233558 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2385680eb64658fe07f8aa3ec16ec314498bd3d6feafc53834fb4c2d568c94ea" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.233604 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mzbqq" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.245593 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-tq8xx" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.245874 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-tq8xx" event={"ID":"dc09e699-e5ce-4e02-b3ae-ce43d120e70d","Type":"ContainerDied","Data":"dcf8630afc437b357fee41d6f6f5be42746432e209b3afa7319f44eff59c5a8e"} Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.245942 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcf8630afc437b357fee41d6f6f5be42746432e209b3afa7319f44eff59c5a8e" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.249668 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.249709 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.301048 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe11ac1b-2633-40fd-b359-01d3309299a8" path="/var/lib/kubelet/pods/fe11ac1b-2633-40fd-b359-01d3309299a8/volumes" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.358315 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.399725 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.415049 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-5b5c69f9bd-9jvz9"] Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.574638 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.720216 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") pod \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.722145 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") pod \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\" (UID: \"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae\") " Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.738062 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" (UID: "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.778178 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49" (OuterVolumeSpecName: "kube-api-access-fbv49") pod "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" (UID: "253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae"). InnerVolumeSpecName "kube-api-access-fbv49". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.845048 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbv49\" (UniqueName: \"kubernetes.io/projected/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-kube-api-access-fbv49\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:11 crc kubenswrapper[4985]: I0128 18:39:11.845129 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.056183 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.078586 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.080357 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.114036 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" probeResult="failure" output=< Jan 28 18:39:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:39:12 crc kubenswrapper[4985]: > Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152342 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") pod \"dc08dbb5-2423-4fe9-8c21-a668459cad74\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152393 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") pod \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152424 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") pod \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\" (UID: \"b4efe2ca-1bc9-40db-944e-fb86222e4f98\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152558 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") pod \"75ac3925-bebe-4c63-999f-073386005723\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152633 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") pod \"dc08dbb5-2423-4fe9-8c21-a668459cad74\" (UID: \"dc08dbb5-2423-4fe9-8c21-a668459cad74\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.152804 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") pod \"75ac3925-bebe-4c63-999f-073386005723\" (UID: \"75ac3925-bebe-4c63-999f-073386005723\") " Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.154629 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b4efe2ca-1bc9-40db-944e-fb86222e4f98" (UID: "b4efe2ca-1bc9-40db-944e-fb86222e4f98"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.155617 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "75ac3925-bebe-4c63-999f-073386005723" (UID: "75ac3925-bebe-4c63-999f-073386005723"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.157501 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc08dbb5-2423-4fe9-8c21-a668459cad74" (UID: "dc08dbb5-2423-4fe9-8c21-a668459cad74"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.161780 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl" (OuterVolumeSpecName: "kube-api-access-22ppl") pod "dc08dbb5-2423-4fe9-8c21-a668459cad74" (UID: "dc08dbb5-2423-4fe9-8c21-a668459cad74"). InnerVolumeSpecName "kube-api-access-22ppl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.162026 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c" (OuterVolumeSpecName: "kube-api-access-q4f6c") pod "b4efe2ca-1bc9-40db-944e-fb86222e4f98" (UID: "b4efe2ca-1bc9-40db-944e-fb86222e4f98"). InnerVolumeSpecName "kube-api-access-q4f6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.169353 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs" (OuterVolumeSpecName: "kube-api-access-dclxs") pod "75ac3925-bebe-4c63-999f-073386005723" (UID: "75ac3925-bebe-4c63-999f-073386005723"). InnerVolumeSpecName "kube-api-access-dclxs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.256816 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc08dbb5-2423-4fe9-8c21-a668459cad74-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257759 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4f6c\" (UniqueName: \"kubernetes.io/projected/b4efe2ca-1bc9-40db-944e-fb86222e4f98-kube-api-access-q4f6c\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257830 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b4efe2ca-1bc9-40db-944e-fb86222e4f98-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257891 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dclxs\" (UniqueName: \"kubernetes.io/projected/75ac3925-bebe-4c63-999f-073386005723-kube-api-access-dclxs\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.257965 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-22ppl\" (UniqueName: \"kubernetes.io/projected/dc08dbb5-2423-4fe9-8c21-a668459cad74-kube-api-access-22ppl\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.258065 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/75ac3925-bebe-4c63-999f-073386005723-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.270950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" event={"ID":"b4efe2ca-1bc9-40db-944e-fb86222e4f98","Type":"ContainerDied","Data":"416cc2721f188704e4b7cf003f51e6d2dd0f4f7385c280dfd7b1160d868cf686"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.270989 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="416cc2721f188704e4b7cf003f51e6d2dd0f4f7385c280dfd7b1160d868cf686" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.271047 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-b80b-account-create-update-mrvzq" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.279841 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.280859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7b9a-account-create-update-l47bt" event={"ID":"75ac3925-bebe-4c63-999f-073386005723","Type":"ContainerDied","Data":"9ccd623fbd6d8642ac8136c5acacb7e7c9cc2077b957698537cbb98c6ec3d29f"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.280933 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ccd623fbd6d8642ac8136c5acacb7e7c9cc2077b957698537cbb98c6ec3d29f" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.285990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-f01b-account-create-update-b985r" event={"ID":"dc08dbb5-2423-4fe9-8c21-a668459cad74","Type":"ContainerDied","Data":"2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.286046 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f7100e35d20ce823fe4fe7825216761e75e5f418f773220ca819bd86ab62de6" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.286116 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-f01b-account-create-update-b985r" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.292880 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-jqvzw" event={"ID":"253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae","Type":"ContainerDied","Data":"a68b313338833953d1d9cc02ae7888a7ecbd0081546779d13fd6e917a1c90e05"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.292927 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a68b313338833953d1d9cc02ae7888a7ecbd0081546779d13fd6e917a1c90e05" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.293019 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-jqvzw" Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.314631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a"} Jan 28 18:39:12 crc kubenswrapper[4985]: I0128 18:39:12.314902 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"426d361783f148b2f6c2b7e23079a36d36f18ddb17a5125f59aee3cbdab7bba2"} Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.138013 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.138646 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.144396 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.297605 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" path="/var/lib/kubelet/pods/0db5c7c8-1c53-42d0-8e23-f1cba882d552/volumes" Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.345611 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.345636 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:39:13 crc kubenswrapper[4985]: I0128 18:39:13.345631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531"} Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.135712 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.380434 4985 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.381571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b"} Jan 28 18:39:14 crc kubenswrapper[4985]: I0128 18:39:14.703690 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.125115 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.405309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerStarted","Data":"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843"} Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.406827 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.425688 4985 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/ceilometer-0" podStartSLOduration=2.5541913320000003 podStartE2EDuration="6.425670563s" podCreationTimestamp="2026-01-28 18:39:10 +0000 UTC" firstStartedPulling="2026-01-28 18:39:11.44083546 +0000 UTC m=+1562.267398281" lastFinishedPulling="2026-01-28 18:39:15.312314691 +0000 UTC m=+1566.138877512" observedRunningTime="2026-01-28 18:39:16.421397112 +0000 UTC m=+1567.247959933" watchObservedRunningTime="2026-01-28 18:39:16.425670563 +0000 UTC m=+1567.252233384" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.973143 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979164 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979192 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979217 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979225 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979236 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979259 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979272 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979280 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979309 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979317 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979334 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75ac3925-bebe-4c63-999f-073386005723" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979340 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="75ac3925-bebe-4c63-999f-073386005723" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: E0128 18:39:16.979353 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979360 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" 
containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979652 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979679 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979695 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db5c7c8-1c53-42d0-8e23-f1cba882d552" containerName="heat-engine" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979708 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979723 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="75ac3925-bebe-4c63-999f-073386005723" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979744 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" containerName="mariadb-account-create-update" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.979756 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" containerName="mariadb-database-create" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.980778 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.984703 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5bk5t" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.984897 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 18:39:16 crc kubenswrapper[4985]: I0128 18:39:16.985008 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.025492 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.086955 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.087047 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.087438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: 
\"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.087721 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.189896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.189989 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.190034 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.190134 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.197214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.204038 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.208731 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.211581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod 
\"nova-cell0-conductor-db-sync-wnljz\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.329792 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.420746 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" containerID="cri-o://d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" gracePeriod=30 Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.421330 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" containerID="cri-o://f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" gracePeriod=30 Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.421370 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" containerID="cri-o://ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" gracePeriod=30 Jan 28 18:39:17 crc kubenswrapper[4985]: I0128 18:39:17.421425 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" containerID="cri-o://5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" gracePeriod=30 Jan 28 18:39:18 crc kubenswrapper[4985]: W0128 18:39:18.000717 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf5e9657_f657_4f0e_9d46_31c6942e70d2.slice/crio-7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a WatchSource:0}: Error finding container 7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a: Status 404 returned error can't find the container with id 7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.006151 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.431716 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerStarted","Data":"7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434489 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" exitCode=0 Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434540 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" exitCode=2 Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434531 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434613 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531"} Jan 28 18:39:18 crc kubenswrapper[4985]: I0128 18:39:18.434550 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" exitCode=0 Jan 28 18:39:21 crc kubenswrapper[4985]: I0128 18:39:21.118407 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:21 crc kubenswrapper[4985]: I0128 18:39:21.177757 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:21 crc kubenswrapper[4985]: I0128 18:39:21.532900 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:39:22 crc kubenswrapper[4985]: I0128 18:39:22.480227 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-mbtp6" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" containerID="cri-o://fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077" gracePeriod=2 Jan 28 18:39:23 crc kubenswrapper[4985]: I0128 18:39:23.493881 4985 generic.go:334] "Generic (PLEG): container finished" podID="1ebe025a-cece-4723-928f-b6649ea27040" containerID="fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077" exitCode=0 Jan 28 18:39:23 crc kubenswrapper[4985]: I0128 18:39:23.494351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.244513 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.363720 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") pod \"1ebe025a-cece-4723-928f-b6649ea27040\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.363808 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") pod \"1ebe025a-cece-4723-928f-b6649ea27040\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.363894 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") pod \"1ebe025a-cece-4723-928f-b6649ea27040\" (UID: \"1ebe025a-cece-4723-928f-b6649ea27040\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.364336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities" (OuterVolumeSpecName: "utilities") pod "1ebe025a-cece-4723-928f-b6649ea27040" (UID: "1ebe025a-cece-4723-928f-b6649ea27040"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.367878 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.373013 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99" (OuterVolumeSpecName: "kube-api-access-qll99") pod "1ebe025a-cece-4723-928f-b6649ea27040" (UID: "1ebe025a-cece-4723-928f-b6649ea27040"). InnerVolumeSpecName "kube-api-access-qll99". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.411570 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.471142 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qll99\" (UniqueName: \"kubernetes.io/projected/1ebe025a-cece-4723-928f-b6649ea27040-kube-api-access-qll99\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.510570 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ebe025a-cece-4723-928f-b6649ea27040" (UID: "1ebe025a-cece-4723-928f-b6649ea27040"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.547012 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerStarted","Data":"ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.549927 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-mbtp6" event={"ID":"1ebe025a-cece-4723-928f-b6649ea27040","Type":"ContainerDied","Data":"cb6d06c38f976feb1cb400142c94c846180c10a5200e7df25e3c5053c66cb609"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.549982 4985 scope.go:117] "RemoveContainer" containerID="fce548919236fde4eb5c4991efb646d47ab79f3a48995a81bc461b9b6f0a9077" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.550120 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-mbtp6" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562764 4985 generic.go:334] "Generic (PLEG): container finished" podID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" exitCode=0 Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562847 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562858 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.562878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f65f780c-a6a6-4e63-a21c-962724bb8c56","Type":"ContainerDied","Data":"426d361783f148b2f6c2b7e23079a36d36f18ddb17a5125f59aee3cbdab7bba2"} Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.567675 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-wnljz" podStartSLOduration=2.586647104 podStartE2EDuration="11.567636977s" podCreationTimestamp="2026-01-28 18:39:16 +0000 UTC" firstStartedPulling="2026-01-28 18:39:18.003980063 +0000 UTC m=+1568.830542884" lastFinishedPulling="2026-01-28 18:39:26.984969936 +0000 UTC m=+1577.811532757" observedRunningTime="2026-01-28 18:39:27.566042092 +0000 UTC m=+1578.392604923" watchObservedRunningTime="2026-01-28 18:39:27.567636977 +0000 UTC m=+1578.394199798" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.575444 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576031 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576092 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576168 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576215 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576255 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.576296 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") pod \"f65f780c-a6a6-4e63-a21c-962724bb8c56\" (UID: \"f65f780c-a6a6-4e63-a21c-962724bb8c56\") " Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.578389 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ebe025a-cece-4723-928f-b6649ea27040-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.579461 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.579486 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.597534 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh" (OuterVolumeSpecName: "kube-api-access-k9frh") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "kube-api-access-k9frh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.601797 4985 scope.go:117] "RemoveContainer" containerID="ac4c636c19c5a93172c99e41217794568a75dad0ad348a3d4022d6d7bcdfe984" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.604467 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts" (OuterVolumeSpecName: "scripts") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.611683 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.619285 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.628907 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-mbtp6"] Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.632102 4985 scope.go:117] "RemoveContainer" containerID="c90878479aa212272619165fb9e5e236c18feef83564d0b2ea60daad9b1b13ff" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.668604 4985 scope.go:117] "RemoveContainer" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.674609 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680775 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680816 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680832 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680846 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9frh\" (UniqueName: \"kubernetes.io/projected/f65f780c-a6a6-4e63-a21c-962724bb8c56-kube-api-access-k9frh\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680858 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.680871 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f65f780c-a6a6-4e63-a21c-962724bb8c56-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.689660 4985 scope.go:117] "RemoveContainer" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.712832 4985 scope.go:117] "RemoveContainer" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.733831 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data" (OuterVolumeSpecName: "config-data") pod "f65f780c-a6a6-4e63-a21c-962724bb8c56" (UID: "f65f780c-a6a6-4e63-a21c-962724bb8c56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.737581 4985 scope.go:117] "RemoveContainer" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.762787 4985 scope.go:117] "RemoveContainer" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.763627 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843\": container with ID starting with ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843 not found: ID does not exist" containerID="ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.763666 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843"} err="failed to get container status \"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843\": rpc error: code = NotFound desc = could not find container \"ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843\": container with ID starting with ec44ed53fb94bebf1d6e21b8f7e6d11dff648923dd53959468e8fb6402e52843 not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.763695 4985 scope.go:117] "RemoveContainer" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.764146 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b\": container with ID starting with 5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b not found: ID does not exist" containerID="5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764199 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b"} err="failed to get container status \"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b\": rpc error: code = NotFound desc = could not find container \"5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b\": container with ID starting with 5e40586ac353c4c5635f170cc4467cbe7e8abe365a6e3a724d55cc6c3775c87b not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764253 4985 scope.go:117] "RemoveContainer" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.764632 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531\": container with ID starting with f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531 not found: ID does not exist" containerID="f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764680 4985 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531"} err="failed to get container status \"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531\": rpc error: code = NotFound desc = could not find container \"f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531\": container with ID starting with f835fe64b8b64ecdb4f33eb98670d785d7b14b2ae8f1e5448b8c1f3f26149531 not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.764731 4985 scope.go:117] "RemoveContainer" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" Jan 28 18:39:27 crc kubenswrapper[4985]: E0128 18:39:27.765602 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a\": container with ID starting with d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a not found: ID does not exist" containerID="d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.765634 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a"} err="failed to get container status \"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a\": rpc error: code = NotFound desc = could not find container \"d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a\": container with ID starting with d40f281cf0efe1517351ecd945fa64f89eb1b80b88bcebcf48062539663f584a not found: ID does not exist" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.783484 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f65f780c-a6a6-4e63-a21c-962724bb8c56-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.953751 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:27 crc kubenswrapper[4985]: I0128 18:39:27.990269 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.011719 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.012684 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.012796 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.012921 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013000 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013098 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-content" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013176 4985 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-content" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013282 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013377 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013476 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013561 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013651 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013736 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" Jan 28 18:39:28 crc kubenswrapper[4985]: E0128 18:39:28.013857 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-utilities" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.013942 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="extract-utilities" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014315 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ebe025a-cece-4723-928f-b6649ea27040" containerName="registry-server" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014432 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-notification-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014542 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="sg-core" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014627 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="proxy-httpd" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.014715 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" containerName="ceilometer-central-agent" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.018042 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.020770 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.020936 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.023198 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202235 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202413 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202539 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202612 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202672 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202766 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.202824 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 
18:39:28.304822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304861 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304919 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304943 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.304978 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.305089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.305535 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.305606 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.310097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.311785 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.313523 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.313954 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.322562 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"ceilometer-0\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " pod="openstack/ceilometer-0" Jan 28 18:39:28 crc kubenswrapper[4985]: I0128 18:39:28.336539 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:28.871977 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:29 crc kubenswrapper[4985]: W0128 18:39:28.875934 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfafcaaa1_299d_4b1a_945c_d6c06e9f9a17.slice/crio-c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c WatchSource:0}: Error finding container c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c: Status 404 returned error can't find the container with id c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:29.277426 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ebe025a-cece-4723-928f-b6649ea27040" path="/var/lib/kubelet/pods/1ebe025a-cece-4723-928f-b6649ea27040/volumes" Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:29.278666 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65f780c-a6a6-4e63-a21c-962724bb8c56" path="/var/lib/kubelet/pods/f65f780c-a6a6-4e63-a21c-962724bb8c56/volumes" Jan 28 18:39:29 crc kubenswrapper[4985]: I0128 18:39:29.628307 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c"} Jan 28 18:39:30 crc kubenswrapper[4985]: I0128 18:39:30.642818 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2"} Jan 28 18:39:30 crc kubenswrapper[4985]: I0128 18:39:30.643321 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877"} Jan 28 18:39:31 crc kubenswrapper[4985]: I0128 18:39:31.658611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4"} Jan 28 18:39:34 crc kubenswrapper[4985]: I0128 18:39:34.695233 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerStarted","Data":"602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c"} Jan 28 18:39:34 crc kubenswrapper[4985]: I0128 18:39:34.695861 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:39:34 crc kubenswrapper[4985]: I0128 18:39:34.727592 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.438711212 podStartE2EDuration="7.727568958s" podCreationTimestamp="2026-01-28 18:39:27 +0000 UTC" firstStartedPulling="2026-01-28 18:39:28.878110645 +0000 UTC m=+1579.704673466" lastFinishedPulling="2026-01-28 18:39:34.166968391 +0000 UTC m=+1584.993531212" observedRunningTime="2026-01-28 18:39:34.720031645 +0000 UTC m=+1585.546594466" watchObservedRunningTime="2026-01-28 18:39:34.727568958 +0000 UTC m=+1585.554131779" Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.384131 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.384776 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" containerID="cri-o://0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.385918 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" containerID="cri-o://448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.385936 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" containerID="cri-o://83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.386377 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" containerID="cri-o://602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c" gracePeriod=30 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739033 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c" exitCode=0 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739313 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4" exitCode=2 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739401 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2" exitCode=0 Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739489 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c"} Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739580 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4"} Jan 28 18:39:37 crc kubenswrapper[4985]: I0128 18:39:37.739652 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2"} Jan 28 18:39:40 crc kubenswrapper[4985]: I0128 18:39:40.777021 4985 generic.go:334] "Generic (PLEG): container finished" podID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerID="0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877" exitCode=0 Jan 28 18:39:40 crc kubenswrapper[4985]: I0128 18:39:40.777104 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877"} Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.134102 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186015 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186076 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186121 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.186982 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.187041 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" gracePeriod=600 Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.226643 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: 
\"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.226784 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.226976 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227097 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227154 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227233 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227240 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227328 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") pod \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\" (UID: \"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17\") " Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.227982 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.228060 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.233954 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5" (OuterVolumeSpecName: "kube-api-access-kmjp5") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "kube-api-access-kmjp5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.234513 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts" (OuterVolumeSpecName: "scripts") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.280776 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.316609 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331483 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kmjp5\" (UniqueName: \"kubernetes.io/projected/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-kube-api-access-kmjp5\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331545 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331559 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.331572 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.332528 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.395207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data" (OuterVolumeSpecName: "config-data") pod "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" (UID: "fafcaaa1-299d-4b1a-945c-d6c06e9f9a17"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.434202 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.434477 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.791771 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"fafcaaa1-299d-4b1a-945c-d6c06e9f9a17","Type":"ContainerDied","Data":"c176f3db693db26b4b11e7d279bdbb6d8155e787e284d88654ff8d9cec7a895c"} Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.791825 4985 scope.go:117] "RemoveContainer" containerID="602dc3529f5f964e2a5109933c2fcd4ae1318fc03d0fc85357efd26a6f89a33c" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.791853 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.796058 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" exitCode=0 Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.796094 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"} Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.796519 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.797998 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.832455 4985 scope.go:117] "RemoveContainer" containerID="448b8e6a5d87ea9a4baa28189d60e2366cd52810b2e6cb329f7855ad524e2ac4" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.869679 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.881937 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.913982 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.914921 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.914949 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.914966 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.914974 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.915006 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915014 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" Jan 28 18:39:41 crc kubenswrapper[4985]: E0128 18:39:41.915047 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915054 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915354 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-central-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915385 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="proxy-httpd" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915404 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="sg-core" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.915416 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" containerName="ceilometer-notification-agent" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.916003 4985 scope.go:117] "RemoveContainer" containerID="83b905d7d95bc6cd0981a583594161bc4777deea33d9b61625db86d913647db2" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.918591 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.923300 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.923566 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.955174 4985 scope.go:117] "RemoveContainer" containerID="0376e2b9794b6228890e4d1bb0e26eaf2787c09895a2b741d2221058843f9877" Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.969190 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:41 crc kubenswrapper[4985]: I0128 18:39:41.980399 4985 scope.go:117] "RemoveContainer" containerID="236f8e60379b001866be409982622e544b3bacd0bbfad449b9eb94ab9c19400a" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046525 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046806 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.046990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.047016 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 
18:39:42.149164 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.149903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150044 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150212 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150619 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150721 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.150827 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.151817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.151832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.159351 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.160738 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.164761 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.168463 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.171721 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") " pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.249854 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:39:42 crc kubenswrapper[4985]: I0128 18:39:42.795483 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:39:42 crc kubenswrapper[4985]: W0128 18:39:42.796482 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2c9e260_5f3f_4c90_a567_384b852ce092.slice/crio-ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc WatchSource:0}: Error finding container ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc: Status 404 returned error can't find the container with id ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc Jan 28 18:39:43 crc kubenswrapper[4985]: I0128 18:39:43.282008 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fafcaaa1-299d-4b1a-945c-d6c06e9f9a17" path="/var/lib/kubelet/pods/fafcaaa1-299d-4b1a-945c-d6c06e9f9a17/volumes" Jan 28 18:39:43 crc kubenswrapper[4985]: I0128 18:39:43.826736 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292"} Jan 28 18:39:43 crc kubenswrapper[4985]: I0128 18:39:43.827095 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc"} Jan 28 18:39:44 crc kubenswrapper[4985]: I0128 18:39:44.843556 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6"} Jan 28 18:39:45 crc kubenswrapper[4985]: I0128 18:39:45.859081 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056"} Jan 28 18:39:47 crc kubenswrapper[4985]: I0128 18:39:47.888217 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerStarted","Data":"d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b"} Jan 28 18:39:47 crc kubenswrapper[4985]: I0128 18:39:47.888885 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:39:47 crc kubenswrapper[4985]: I0128 18:39:47.916511 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.134068043 podStartE2EDuration="6.916487182s" podCreationTimestamp="2026-01-28 18:39:41 +0000 UTC" firstStartedPulling="2026-01-28 18:39:42.798961852 +0000 UTC m=+1593.625524673" lastFinishedPulling="2026-01-28 18:39:47.581380991 +0000 UTC m=+1598.407943812" observedRunningTime="2026-01-28 18:39:47.908527707 +0000 UTC m=+1598.735090528" watchObservedRunningTime="2026-01-28 18:39:47.916487182 +0000 UTC m=+1598.743050003" Jan 28 18:39:49 crc kubenswrapper[4985]: I0128 18:39:49.919799 4985 generic.go:334] "Generic (PLEG): container finished" podID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerID="ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6" exitCode=0 Jan 28 18:39:49 crc kubenswrapper[4985]: I0128 18:39:49.919928 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerDied","Data":"ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6"} Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.688080 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.696535 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772347 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772702 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772940 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.772985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773033 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773066 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773106 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") pod \"89fc2c75-41eb-441e-a171-5c716b823277\" (UID: \"89fc2c75-41eb-441e-a171-5c716b823277\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.773181 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.781666 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.781783 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.784593 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n" (OuterVolumeSpecName: "kube-api-access-cp56n") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "kube-api-access-cp56n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.785294 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd" (OuterVolumeSpecName: "kube-api-access-79bgd") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "kube-api-access-79bgd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.823426 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.848637 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.857207 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data" (OuterVolumeSpecName: "config-data") pod "89fc2c75-41eb-441e-a171-5c716b823277" (UID: "89fc2c75-41eb-441e-a171-5c716b823277"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.874310 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data" (OuterVolumeSpecName: "config-data") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875170 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") pod \"1373681b-8290-4963-897b-b5b27690e19a\" (UID: \"1373681b-8290-4963-897b-b5b27690e19a\") " Jan 28 18:39:50 crc kubenswrapper[4985]: W0128 18:39:50.875305 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/1373681b-8290-4963-897b-b5b27690e19a/volumes/kubernetes.io~secret/config-data Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875327 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data" (OuterVolumeSpecName: "config-data") pod "1373681b-8290-4963-897b-b5b27690e19a" (UID: "1373681b-8290-4963-897b-b5b27690e19a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875917 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cp56n\" (UniqueName: \"kubernetes.io/projected/1373681b-8290-4963-897b-b5b27690e19a-kube-api-access-cp56n\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875946 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875955 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875965 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875974 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875985 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-79bgd\" (UniqueName: \"kubernetes.io/projected/89fc2c75-41eb-441e-a171-5c716b823277-kube-api-access-79bgd\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.875997 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/89fc2c75-41eb-441e-a171-5c716b823277-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.876005 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1373681b-8290-4963-897b-b5b27690e19a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936652 4985 generic.go:334] "Generic (PLEG): container finished" podID="1373681b-8290-4963-897b-b5b27690e19a" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a" exitCode=137 Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 
18:39:50.936744 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerDied","Data":"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"} Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936784 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" event={"ID":"1373681b-8290-4963-897b-b5b27690e19a","Type":"ContainerDied","Data":"7f8aaec146afdcb274b6be4540ed468073cb056ab2a74bd69ec462b02099487a"} Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936702 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-84b7b4c956-xs5qg" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.936809 4985 scope.go:117] "RemoveContainer" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a" Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.951869 4985 generic.go:334] "Generic (PLEG): container finished" podID="89fc2c75-41eb-441e-a171-5c716b823277" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00" exitCode=137 Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.952335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerDied","Data":"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"} Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.952410 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-5965d558dc-cg7wv" event={"ID":"89fc2c75-41eb-441e-a171-5c716b823277","Type":"ContainerDied","Data":"af15e77d0cac085450dbdbf09aea29f94aab86926bae124219c8abb6e3a9c5c2"} Jan 28 18:39:50 crc kubenswrapper[4985]: I0128 18:39:50.952371 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-5965d558dc-cg7wv" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.010944 4985 scope.go:117] "RemoveContainer" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a" Jan 28 18:39:51 crc kubenswrapper[4985]: E0128 18:39:51.015922 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a\": container with ID starting with 0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a not found: ID does not exist" containerID="0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.016096 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a"} err="failed to get container status \"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a\": rpc error: code = NotFound desc = could not find container \"0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a\": container with ID starting with 0ff5bfbbba21089d87c94e567299222b66e5a5a3ee11e8de3620293fa94c878a not found: ID does not exist" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.016130 4985 scope.go:117] "RemoveContainer" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.037342 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.061568 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-84b7b4c956-xs5qg"] Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.081631 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.098595 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-5965d558dc-cg7wv"] Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.132654 4985 scope.go:117] "RemoveContainer" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00" Jan 28 18:39:51 crc kubenswrapper[4985]: E0128 18:39:51.133459 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00\": container with ID starting with 06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00 not found: ID does not exist" containerID="06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.133515 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00"} err="failed to get container status \"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00\": rpc error: code = NotFound desc = could not find container \"06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00\": container with ID starting with 06e237f2681fbaac8f516b43627a27f54e355908f049b878940a3c0181b25a00 not found: ID does not exist" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.318375 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1373681b-8290-4963-897b-b5b27690e19a" 
path="/var/lib/kubelet/pods/1373681b-8290-4963-897b-b5b27690e19a/volumes" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.319749 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89fc2c75-41eb-441e-a171-5c716b823277" path="/var/lib/kubelet/pods/89fc2c75-41eb-441e-a171-5c716b823277/volumes" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.695903 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.825705 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.825886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.826026 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.826069 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") pod \"df5e9657-f657-4f0e-9d46-31c6942e70d2\" (UID: \"df5e9657-f657-4f0e-9d46-31c6942e70d2\") " Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.834849 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb" (OuterVolumeSpecName: "kube-api-access-8gpjb") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "kube-api-access-8gpjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.871448 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts" (OuterVolumeSpecName: "scripts") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.917230 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data" (OuterVolumeSpecName: "config-data") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.934500 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8gpjb\" (UniqueName: \"kubernetes.io/projected/df5e9657-f657-4f0e-9d46-31c6942e70d2-kube-api-access-8gpjb\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.935166 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.935325 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.935017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df5e9657-f657-4f0e-9d46-31c6942e70d2" (UID: "df5e9657-f657-4f0e-9d46-31c6942e70d2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.991987 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-wnljz" Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.992016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-wnljz" event={"ID":"df5e9657-f657-4f0e-9d46-31c6942e70d2","Type":"ContainerDied","Data":"7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a"} Jan 28 18:39:51 crc kubenswrapper[4985]: I0128 18:39:51.992062 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c964c71fbf53a73e02f741a55147e78ae61c3acf98bc98cef2fafebf5b6d13a" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.037406 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df5e9657-f657-4f0e-9d46-31c6942e70d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.076569 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:39:52 crc kubenswrapper[4985]: E0128 18:39:52.077356 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.077444 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" Jan 28 18:39:52 crc kubenswrapper[4985]: E0128 18:39:52.077518 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.077581 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" Jan 28 18:39:52 crc kubenswrapper[4985]: E0128 18:39:52.077681 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerName="nova-cell0-conductor-db-sync" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.077743 4985 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerName="nova-cell0-conductor-db-sync" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.078126 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="89fc2c75-41eb-441e-a171-5c716b823277" containerName="heat-api" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.078205 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1373681b-8290-4963-897b-b5b27690e19a" containerName="heat-cfnapi" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.078303 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" containerName="nova-cell0-conductor-db-sync" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.079666 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.084480 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5bk5t" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.085365 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.118861 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.139299 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.139368 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.139466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr24j\" (UniqueName: \"kubernetes.io/projected/78b595e2-b61a-4921-8d69-28adfa53f6bb-kube-api-access-mr24j\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.242531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.242912 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.242979 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mr24j\" (UniqueName: 
\"kubernetes.io/projected/78b595e2-b61a-4921-8d69-28adfa53f6bb-kube-api-access-mr24j\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.249811 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.251040 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/78b595e2-b61a-4921-8d69-28adfa53f6bb-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.274406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mr24j\" (UniqueName: \"kubernetes.io/projected/78b595e2-b61a-4921-8d69-28adfa53f6bb-kube-api-access-mr24j\") pod \"nova-cell0-conductor-0\" (UID: \"78b595e2-b61a-4921-8d69-28adfa53f6bb\") " pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.434773 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:52 crc kubenswrapper[4985]: W0128 18:39:52.958845 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod78b595e2_b61a_4921_8d69_28adfa53f6bb.slice/crio-56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9 WatchSource:0}: Error finding container 56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9: Status 404 returned error can't find the container with id 56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9 Jan 28 18:39:52 crc kubenswrapper[4985]: I0128 18:39:52.961045 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 18:39:53 crc kubenswrapper[4985]: I0128 18:39:53.010666 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"78b595e2-b61a-4921-8d69-28adfa53f6bb","Type":"ContainerStarted","Data":"56d7b474e9e05b7e9f81054b94df470cc8b190149f659e85ea39f20d4f2ba2e9"} Jan 28 18:39:54 crc kubenswrapper[4985]: I0128 18:39:54.025161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"78b595e2-b61a-4921-8d69-28adfa53f6bb","Type":"ContainerStarted","Data":"ba93ebf5042eedb0f2f0a021ef445a90bb3767dfa7ad40c16120aa4c3cbcf755"} Jan 28 18:39:54 crc kubenswrapper[4985]: I0128 18:39:54.025790 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 18:39:54 crc kubenswrapper[4985]: I0128 18:39:54.046701 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.046682191 podStartE2EDuration="2.046682191s" podCreationTimestamp="2026-01-28 18:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:39:54.042115302 +0000 UTC m=+1604.868678133" watchObservedRunningTime="2026-01-28 18:39:54.046682191 +0000 UTC m=+1604.873245012" 
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037027 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037404 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent" containerID="cri-o://9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037446 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent" containerID="cri-o://c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037459 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core" containerID="cri-o://a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.037512 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd" containerID="cri-o://d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b" gracePeriod=30
Jan 28 18:39:55 crc kubenswrapper[4985]: I0128 18:39:55.264491 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108"
Jan 28 18:39:55 crc kubenswrapper[4985]: E0128 18:39:55.265216 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046379 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b" exitCode=0
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046710 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056" exitCode=2
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046722 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6" exitCode=0
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046731 4985 generic.go:334] "Generic (PLEG): container finished" podID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerID="9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292" exitCode=0
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046442 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046766 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046781 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.046793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292"}
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.806441 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851190 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851424 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851505 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851556 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851628 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.851672 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") pod \"d2c9e260-5f3f-4c90-a567-384b852ce092\" (UID: \"d2c9e260-5f3f-4c90-a567-384b852ce092\") "
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.853076 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.860312 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.886375 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47" (OuterVolumeSpecName: "kube-api-access-xxl47") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "kube-api-access-xxl47". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.954474 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.954519 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d2c9e260-5f3f-4c90-a567-384b852ce092-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.954534 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xxl47\" (UniqueName: \"kubernetes.io/projected/d2c9e260-5f3f-4c90-a567-384b852ce092-kube-api-access-xxl47\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.969843 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts" (OuterVolumeSpecName: "scripts") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:39:56 crc kubenswrapper[4985]: I0128 18:39:56.978428 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.051699 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.057266 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.057314 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.057326 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.091110 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d2c9e260-5f3f-4c90-a567-384b852ce092","Type":"ContainerDied","Data":"ad59f3e71444e3331f4682b452af75124995ead8fabd303f85cb7e005460e9cc"}
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.091183 4985 scope.go:117] "RemoveContainer" containerID="d33549903e378eb3f2c50c5fa055b35792cec086074c052966d40b8ef4df1d6b"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.091852 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.119605 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data" (OuterVolumeSpecName: "config-data") pod "d2c9e260-5f3f-4c90-a567-384b852ce092" (UID: "d2c9e260-5f3f-4c90-a567-384b852ce092"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.127756 4985 scope.go:117] "RemoveContainer" containerID="a76189df723ddef4048b2fa893a4c6ec36f2c8a3346dfe8bd8fc5384f88ec056"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.149324 4985 scope.go:117] "RemoveContainer" containerID="c945cbfbb90b3d9c0637bc1334eb04e9240f9d240e95b40212143dd3b57622f6"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.159442 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d2c9e260-5f3f-4c90-a567-384b852ce092-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.173610 4985 scope.go:117] "RemoveContainer" containerID="9c0b02ff2b6094e1fbd6d2a06391fd74bcc3b3f2cb8793a231a1aacfaa49b292"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.429928 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.442843 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.453504 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.453983 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454008 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core"
Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.454031 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454043 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent"
Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.454070 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454077 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent"
Jan 28 18:39:57 crc kubenswrapper[4985]: E0128 18:39:57.454108 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454117 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454340 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="proxy-httpd"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454373 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-notification-agent"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454394 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="sg-core"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.454401 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" containerName="ceilometer-central-agent"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.456405 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.466816 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.466939 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.474306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570436 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570514 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570541 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570559 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570606 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.570833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673532 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673573 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673604 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673696 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673786 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673906 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.673951 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.674354 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.679345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.679438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.680503 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.681973 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.692425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"ceilometer-0\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") " pod="openstack/ceilometer-0"
Jan 28 18:39:57 crc kubenswrapper[4985]: I0128 18:39:57.777706 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:39:58 crc kubenswrapper[4985]: I0128 18:39:58.298586 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 18:39:58 crc kubenswrapper[4985]: I0128 18:39:58.308121 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 18:39:59 crc kubenswrapper[4985]: I0128 18:39:59.133118 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"cda0d3d7eb455e4b9ead99374175951ce213d2d28aa9402eeb2c7090c5991dcb"}
Jan 28 18:39:59 crc kubenswrapper[4985]: I0128 18:39:59.285921 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2c9e260-5f3f-4c90-a567-384b852ce092" path="/var/lib/kubelet/pods/d2c9e260-5f3f-4c90-a567-384b852ce092/volumes"
Jan 28 18:40:00 crc kubenswrapper[4985]: I0128 18:40:00.144039 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa"}
Jan 28 18:40:00 crc kubenswrapper[4985]: I0128 18:40:00.144476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a"}
Jan 28 18:40:02 crc kubenswrapper[4985]: I0128 18:40:02.190453 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b"}
Jan 28 18:40:02 crc kubenswrapper[4985]: I0128 18:40:02.483152 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.033580 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.037414 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.044623 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.044759 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.048965 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133069 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133355 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133444 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.133548 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236674 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236729 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.236904 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.293366 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.293406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.297814 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.300406 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"nova-cell0-cell-mapping-m82mm\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") " pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.357062 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.358696 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.362401 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.363245 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.404052 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.441585 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.443226 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.455746 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.458503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.458817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.458991 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.501637 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.562484 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565129 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565188 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565210 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565303 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565336 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.565358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.583375 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.587076 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.588456 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.611523 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.747104 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-create-jdztq"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.749130 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.749226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.750186 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751682 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751747 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.751833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.768305 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.770492 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.802082 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"nova-cell1-novncproxy-0\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.831531 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"nova-scheduler-0\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.863182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.864533 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879471 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879648 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.879751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.880362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.888784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.890041 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.911150 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jdztq"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.953824 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"nova-api-0\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " pod="openstack/nova-api-0"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.953904 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.962172 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"]
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.965001 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.969753 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-db-secret"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984726 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984884 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984922 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.984987 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.986011 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:03 crc kubenswrapper[4985]: I0128 18:40:03.989767 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:03.996980 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.003419 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.039018 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"]
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.039112 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"aodh-db-create-jdztq\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.046115 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.050299 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.077558 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"]
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.084287 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.088620 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.089571 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"]
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.117309 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.118802 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdztq"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128522 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128876 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.128989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.129071 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.129893 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.175570 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"aodh-682b-account-create-update-fphsf\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231114 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231188 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231304 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231545 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231602 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231633 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.231655 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.238928 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.249134 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.289506 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.291906 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"nova-metadata-0\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.305079 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337549 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337870 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337930 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.337955 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\")
" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.338017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.339036 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.339856 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340309 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340409 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340671 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.340903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.363451 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"dnsmasq-dns-568d7fd7cf-hjzhw\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.432145 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:04 crc kubenswrapper[4985]: I0128 18:40:04.559615 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.154225 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.157449 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.164084 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.166933 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.176892 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.276516 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.276652 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.283125 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.283260 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.351308 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.377584 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.385697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.385856 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.386104 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.386175 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.396939 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.397380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.401348 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.405478 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.410910 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"nova-cell1-conductor-db-sync-rxz6k\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") " pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.477159 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerStarted","Data":"d67f49419ddc18736265dbf8231bcf89cd6ee9def418fabf88a409ff0a470ae3"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.491201 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerStarted","Data":"b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.491456 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.495319 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerStarted","Data":"6ab1f97ac874b54ef01c0179a3153dd1ba3d40d00482df2197af30281a5558ed"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.499863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerStarted","Data":"d8cf9fb9c6cec17cb1a2721de6a0e35c45b968fbf964f4ce2fc3f3f714ea3e1d"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.507068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerStarted","Data":"c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.507108 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerStarted","Data":"c42ea52d09811fa700e48475032c542d4742677726b27e37a6c29c19e54b460e"} Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.533941 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.715276699 podStartE2EDuration="8.533912702s" podCreationTimestamp="2026-01-28 18:39:57 +0000 UTC" firstStartedPulling="2026-01-28 18:39:58.307839484 +0000 UTC m=+1609.134402305" lastFinishedPulling="2026-01-28 18:40:04.126475487 +0000 UTC m=+1614.953038308" observedRunningTime="2026-01-28 18:40:05.524240959 +0000 UTC m=+1616.350803780" watchObservedRunningTime="2026-01-28 18:40:05.533912702 +0000 UTC m=+1616.360475523" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.559862 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-m82mm" podStartSLOduration=2.559839564 podStartE2EDuration="2.559839564s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:05.544495091 +0000 UTC m=+1616.371057912" watchObservedRunningTime="2026-01-28 18:40:05.559839564 +0000 UTC m=+1616.386402385" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.704037 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.805979 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-create-jdztq"] Jan 28 18:40:05 crc kubenswrapper[4985]: W0128 18:40:05.826441 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2578b35_7408_46ed_bcee_8b0ff114cd33.slice/crio-1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0 WatchSource:0}: Error finding container 1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0: Status 404 returned error can't find the container with id 1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0 Jan 28 18:40:05 crc kubenswrapper[4985]: W0128 18:40:05.826803 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36263e10_c8a1_46f3_8fbd_b19bf25c48f5.slice/crio-313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd WatchSource:0}: Error finding container 313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd: Status 404 returned error can't find the container with id 313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.837560 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.891099 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"] Jan 28 18:40:05 crc kubenswrapper[4985]: I0128 18:40:05.943843 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"] Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.411944 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"] Jan 28 18:40:06 crc kubenswrapper[4985]: W0128 18:40:06.454393 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddc545ce7_58a7_4757_8eab_8b0a28570a49.slice/crio-bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9 WatchSource:0}: Error finding container bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9: Status 404 returned error can't find the container with id bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9 Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.585437 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerStarted","Data":"bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9"} Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.598945 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerStarted","Data":"178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9"} Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.599003 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerStarted","Data":"1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0"} Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.607364 4985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerStarted","Data":"313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd"} Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.619758 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerStarted","Data":"b12e09f6a40d1423b050a43aba39f7da27aac982d0fc418cb95ef0f8e230e6e1"} Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.628004 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-682b-account-create-update-fphsf" event={"ID":"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5","Type":"ContainerStarted","Data":"c4744d3e091d5fe137338eff5a0eae180d79a285c04bb7a04b679d4f0af6cc4d"} Jan 28 18:40:06 crc kubenswrapper[4985]: I0128 18:40:06.629558 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-create-jdztq" podStartSLOduration=3.629534743 podStartE2EDuration="3.629534743s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:06.615097856 +0000 UTC m=+1617.441660697" watchObservedRunningTime="2026-01-28 18:40:06.629534743 +0000 UTC m=+1617.456097564" Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.262690 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.281660 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.647095 4985 generic.go:334] "Generic (PLEG): container finished" podID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerID="382f43a07ac5b420a95def886ddd1d4454cef25ffaca287fa20c580c3c9e42fc" exitCode=0 Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.647532 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-682b-account-create-update-fphsf" event={"ID":"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5","Type":"ContainerDied","Data":"382f43a07ac5b420a95def886ddd1d4454cef25ffaca287fa20c580c3c9e42fc"} Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.653468 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerStarted","Data":"5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178"} Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.660368 4985 generic.go:334] "Generic (PLEG): container finished" podID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerID="178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9" exitCode=0 Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.660500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerDied","Data":"178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9"} Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.673111 4985 generic.go:334] "Generic (PLEG): container finished" podID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerID="156d97e63d4214e7b4ebce332bf5ca2efd74529bc9a0eb50a6b04fcfb1f0fcab" exitCode=0 Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.673176 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerDied","Data":"156d97e63d4214e7b4ebce332bf5ca2efd74529bc9a0eb50a6b04fcfb1f0fcab"} Jan 28 18:40:07 crc kubenswrapper[4985]: I0128 18:40:07.747857 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" podStartSLOduration=2.747838366 podStartE2EDuration="2.747838366s" podCreationTimestamp="2026-01-28 18:40:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:07.742449594 +0000 UTC m=+1618.569012415" watchObservedRunningTime="2026-01-28 18:40:07.747838366 +0000 UTC m=+1618.574401187" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.003779 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.150483 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") pod \"c2578b35-7408-46ed-bcee-8b0ff114cd33\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.150627 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") pod \"c2578b35-7408-46ed-bcee-8b0ff114cd33\" (UID: \"c2578b35-7408-46ed-bcee-8b0ff114cd33\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.152140 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c2578b35-7408-46ed-bcee-8b0ff114cd33" (UID: "c2578b35-7408-46ed-bcee-8b0ff114cd33"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.162157 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb" (OuterVolumeSpecName: "kube-api-access-ct9fb") pod "c2578b35-7408-46ed-bcee-8b0ff114cd33" (UID: "c2578b35-7408-46ed-bcee-8b0ff114cd33"). InnerVolumeSpecName "kube-api-access-ct9fb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.254326 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c2578b35-7408-46ed-bcee-8b0ff114cd33-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.254372 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct9fb\" (UniqueName: \"kubernetes.io/projected/c2578b35-7408-46ed-bcee-8b0ff114cd33-kube-api-access-ct9fb\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.264166 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:10 crc kubenswrapper[4985]: E0128 18:40:10.264705 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.414953 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.564915 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") pod \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.565234 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") pod \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\" (UID: \"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5\") " Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.566048 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" (UID: "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.574376 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf" (OuterVolumeSpecName: "kube-api-access-gjndf") pod "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" (UID: "21d5020b-3b33-4e6c-95dd-9aad46d3f0e5"). InnerVolumeSpecName "kube-api-access-gjndf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.668137 4985 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.668175 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjndf\" (UniqueName: \"kubernetes.io/projected/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5-kube-api-access-gjndf\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.714917 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-682b-account-create-update-fphsf" event={"ID":"21d5020b-3b33-4e6c-95dd-9aad46d3f0e5","Type":"ContainerDied","Data":"c4744d3e091d5fe137338eff5a0eae180d79a285c04bb7a04b679d4f0af6cc4d"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.714966 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c4744d3e091d5fe137338eff5a0eae180d79a285c04bb7a04b679d4f0af6cc4d" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.715006 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-682b-account-create-update-fphsf" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.721431 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-create-jdztq" event={"ID":"c2578b35-7408-46ed-bcee-8b0ff114cd33","Type":"ContainerDied","Data":"1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.721478 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1101d2f73836c5aa3c89354862e987ca0831edcb43c759a121a3fdc6fa8510c0" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.721482 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-create-jdztq" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.725493 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerStarted","Data":"4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.725790 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.728023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerStarted","Data":"8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41"} Jan 28 18:40:10 crc kubenswrapper[4985]: I0128 18:40:10.768754 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" podStartSLOduration=7.768728812 podStartE2EDuration="7.768728812s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:10.757105534 +0000 UTC m=+1621.583668355" watchObservedRunningTime="2026-01-28 18:40:10.768728812 +0000 UTC m=+1621.595291633" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753070 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerStarted","Data":"8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753609 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerStarted","Data":"0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753328 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata" containerID="cri-o://8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf" gracePeriod=30 Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.753120 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" containerID="cri-o://0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2" gracePeriod=30 Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.757664 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerStarted","Data":"7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.760213 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerStarted","Data":"8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.760295 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" 
podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e" gracePeriod=30 Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.766891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerStarted","Data":"4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff"} Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.805156 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=4.121407102 podStartE2EDuration="8.805133563s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.83807843 +0000 UTC m=+1616.664641251" lastFinishedPulling="2026-01-28 18:40:10.521804901 +0000 UTC m=+1621.348367712" observedRunningTime="2026-01-28 18:40:11.777545954 +0000 UTC m=+1622.604108805" watchObservedRunningTime="2026-01-28 18:40:11.805133563 +0000 UTC m=+1622.631696424" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.821900 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.7504197870000002 podStartE2EDuration="8.821877145s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.302126208 +0000 UTC m=+1616.128689029" lastFinishedPulling="2026-01-28 18:40:10.373583526 +0000 UTC m=+1621.200146387" observedRunningTime="2026-01-28 18:40:11.803599529 +0000 UTC m=+1622.630162380" watchObservedRunningTime="2026-01-28 18:40:11.821877145 +0000 UTC m=+1622.648439976" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.860371 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.798808904 podStartE2EDuration="8.860350202s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.310803323 +0000 UTC m=+1616.137366144" lastFinishedPulling="2026-01-28 18:40:10.372344621 +0000 UTC m=+1621.198907442" observedRunningTime="2026-01-28 18:40:11.852201861 +0000 UTC m=+1622.678764692" watchObservedRunningTime="2026-01-28 18:40:11.860350202 +0000 UTC m=+1622.686913033" Jan 28 18:40:11 crc kubenswrapper[4985]: I0128 18:40:11.877385 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.80435676 podStartE2EDuration="8.877367442s" podCreationTimestamp="2026-01-28 18:40:03 +0000 UTC" firstStartedPulling="2026-01-28 18:40:05.300027289 +0000 UTC m=+1616.126590110" lastFinishedPulling="2026-01-28 18:40:10.373037971 +0000 UTC m=+1621.199600792" observedRunningTime="2026-01-28 18:40:11.872088543 +0000 UTC m=+1622.698651364" watchObservedRunningTime="2026-01-28 18:40:11.877367442 +0000 UTC m=+1622.703930283" Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784457 4985 generic.go:334] "Generic (PLEG): container finished" podID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerID="8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf" exitCode=0 Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784671 4985 generic.go:334] "Generic (PLEG): container finished" podID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerID="0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2" exitCode=143 Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784904 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerDied","Data":"8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf"} Jan 28 18:40:12 crc kubenswrapper[4985]: I0128 18:40:12.784972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerDied","Data":"0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2"} Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.069521 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154573 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154708 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.154864 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") pod \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\" (UID: \"36263e10-c8a1-46f3-8fbd-b19bf25c48f5\") " Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.155136 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs" (OuterVolumeSpecName: "logs") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.155845 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.160305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf" (OuterVolumeSpecName: "kube-api-access-j9dlf") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "kube-api-access-j9dlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.192225 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data" (OuterVolumeSpecName: "config-data") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.209416 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "36263e10-c8a1-46f3-8fbd-b19bf25c48f5" (UID: "36263e10-c8a1-46f3-8fbd-b19bf25c48f5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.258317 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.258371 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9dlf\" (UniqueName: \"kubernetes.io/projected/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-kube-api-access-j9dlf\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.258386 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36263e10-c8a1-46f3-8fbd-b19bf25c48f5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.800320 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"36263e10-c8a1-46f3-8fbd-b19bf25c48f5","Type":"ContainerDied","Data":"313ea23ef8841271d1f96b426e8d01778c710d0948d5bd636293d482289c28dd"} Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.800633 4985 scope.go:117] "RemoveContainer" containerID="8d0763045498cfbdfcd6eb66b00853b414f7ffcc4766f6c81a50d949aa924daf" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.801992 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.803014 4985 generic.go:334] "Generic (PLEG): container finished" podID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerID="c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce" exitCode=0 Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.803043 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerDied","Data":"c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce"} Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.842539 4985 scope.go:117] "RemoveContainer" containerID="0c746f04d229134099964148a5ac730c73c4e2d018cadac04c5153a47fe141b2" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.863908 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.880544 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896065 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896663 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerName="mariadb-database-create" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896685 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerName="mariadb-database-create" Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896693 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896701 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896710 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896716 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata" Jan 28 18:40:13 crc kubenswrapper[4985]: E0128 18:40:13.896736 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerName="mariadb-account-create-update" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896742 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerName="mariadb-account-create-update" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896943 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" containerName="mariadb-account-create-update" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896962 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" containerName="mariadb-database-create" Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896983 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-log" Jan 28 18:40:13 
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.896992 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" containerName="nova-metadata-metadata"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.898319 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.901138 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.902914 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.933528 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.956365 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-hgpsv"]
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.957905 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.961149 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.965537 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.966075 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.966198 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976279 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976317 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976387 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976418 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976441 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976565 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.976669 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:13 crc kubenswrapper[4985]: I0128 18:40:13.998874 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hgpsv"]
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.047577 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.047623 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079702 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079759 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.079921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080002 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080067 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080210 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.080660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.086901 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.087099 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.087902 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.087947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.088750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.090553 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.090594 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.090710 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.102231 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"aodh-db-sync-hgpsv\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.104834 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"nova-metadata-0\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.119060 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.138960 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.219778 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.278768 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hgpsv"
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.840410 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:14 crc kubenswrapper[4985]: W0128 18:40:14.863727 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3a6db7_1b8e_47a8_8c09_9f13fa2823a2.slice/crio-a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79 WatchSource:0}: Error finding container a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79: Status 404 returned error can't find the container with id a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79
Jan 28 18:40:14 crc kubenswrapper[4985]: I0128 18:40:14.907056 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.057069 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-hgpsv"]
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.132015 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.132043 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.240:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.283062 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36263e10-c8a1-46f3-8fbd-b19bf25c48f5" path="/var/lib/kubelet/pods/36263e10-c8a1-46f3-8fbd-b19bf25c48f5/volumes"
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.508394 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.637337 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") "
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.637717 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") "
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.637787 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") "
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.638194 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") pod \"14e43739-91f4-43c9-9b01-5f0574a3b150\" (UID: \"14e43739-91f4-43c9-9b01-5f0574a3b150\") "
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.644053 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts" (OuterVolumeSpecName: "scripts") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.644560 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p" (OuterVolumeSpecName: "kube-api-access-gh45p") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "kube-api-access-gh45p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.652758 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.652794 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gh45p\" (UniqueName: \"kubernetes.io/projected/14e43739-91f4-43c9-9b01-5f0574a3b150-kube-api-access-gh45p\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.674602 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.682477 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data" (OuterVolumeSpecName: "config-data") pod "14e43739-91f4-43c9-9b01-5f0574a3b150" (UID: "14e43739-91f4-43c9-9b01-5f0574a3b150"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.755578 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.755613 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14e43739-91f4-43c9-9b01-5f0574a3b150-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.860271 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerStarted","Data":"d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b"}
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.860315 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerStarted","Data":"ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c"}
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.860326 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerStarted","Data":"a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79"}
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.862400 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerStarted","Data":"72a3d23c9a572bc420fc7e3eb89dda8941d63c42b0d6a69ff809fa9dea983c2f"}
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.872545 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-m82mm"
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.872573 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-m82mm" event={"ID":"14e43739-91f4-43c9-9b01-5f0574a3b150","Type":"ContainerDied","Data":"c42ea52d09811fa700e48475032c542d4742677726b27e37a6c29c19e54b460e"}
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.872654 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c42ea52d09811fa700e48475032c542d4742677726b27e37a6c29c19e54b460e"
Jan 28 18:40:15 crc kubenswrapper[4985]: I0128 18:40:15.900505 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.900483994 podStartE2EDuration="2.900483994s" podCreationTimestamp="2026-01-28 18:40:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:15.883862755 +0000 UTC m=+1626.710425576" watchObservedRunningTime="2026-01-28 18:40:15.900483994 +0000 UTC m=+1626.727046815"
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.010541 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.010809 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" containerID="cri-o://8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41" gracePeriod=30
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.011518 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" containerID="cri-o://7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b" gracePeriod=30
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.026304 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.077832 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.885614 4985 generic.go:334] "Generic (PLEG): container finished" podID="9094cf8a-0196-4d57-9b52-c433eece1088" containerID="8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41" exitCode=143
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.885674 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerDied","Data":"8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41"}
Jan 28 18:40:16 crc kubenswrapper[4985]: I0128 18:40:16.885772 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler" containerID="cri-o://4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" gracePeriod=30
Jan 28 18:40:17 crc kubenswrapper[4985]: I0128 18:40:17.897601 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log" containerID="cri-o://ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c" gracePeriod=30
Jan 28 18:40:17 crc kubenswrapper[4985]: I0128 18:40:17.897689 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata" containerID="cri-o://d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b" gracePeriod=30
Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924506 4985 generic.go:334] "Generic (PLEG): container finished" podID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerID="d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b" exitCode=0
Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924548 4985 generic.go:334] "Generic (PLEG): container finished" podID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerID="ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c" exitCode=143
Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924539 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerDied","Data":"d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b"}
Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.924600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerDied","Data":"ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c"}
Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.927754 4985 generic.go:334] "Generic (PLEG): container finished" podID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" exitCode=0
Jan 28 18:40:18 crc kubenswrapper[4985]: I0128 18:40:18.927801 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerDied","Data":"4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff"}
Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.090753 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.091135 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.091555 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 18:40:19 crc kubenswrapper[4985]: E0128 18:40:19.091619 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler"
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.220659 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.220704 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.434460 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw"
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.508619 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"]
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.508846 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns" containerID="cri-o://1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71" gracePeriod=10
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.944191 4985 generic.go:334] "Generic (PLEG): container finished" podID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerID="1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71" exitCode=0
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.944333 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerDied","Data":"1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71"}
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.947700 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerID="5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178" exitCode=0
Jan 28 18:40:19 crc kubenswrapper[4985]: I0128 18:40:19.947743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerDied","Data":"5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178"}
Jan 28 18:40:20 crc kubenswrapper[4985]: I0128 18:40:20.971958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr" event={"ID":"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b","Type":"ContainerDied","Data":"124e40d06c3bc6dec66768ab9299f6ec41b3437c9591832dd7f81dc8a3da2106"}
Jan 28 18:40:20 crc kubenswrapper[4985]: I0128 18:40:20.972283 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="124e40d06c3bc6dec66768ab9299f6ec41b3437c9591832dd7f81dc8a3da2106"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.055592 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.074130 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.099831 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.100816 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.100950 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.100989 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101075 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101231 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101288 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") pod \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\" (UID: \"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.101518 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") pod \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\" (UID: \"1e4282fb-bc3c-4444-82f9-350d2d3b7b0b\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.106949 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs" (OuterVolumeSpecName: "logs") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.120526 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv" (OuterVolumeSpecName: "kube-api-access-xfhgv") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "kube-api-access-xfhgv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.122991 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87" (OuterVolumeSpecName: "kube-api-access-mjq87") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "kube-api-access-mjq87". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205072 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") pod \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205126 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") pod \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205142 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") pod \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\" (UID: \"0b5f547e-c916-40cd-8f40-5fc2b482a4f4\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205860 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xfhgv\" (UniqueName: \"kubernetes.io/projected/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-kube-api-access-xfhgv\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205876 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-logs\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.205886 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjq87\" (UniqueName: \"kubernetes.io/projected/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-kube-api-access-mjq87\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.210872 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58" (OuterVolumeSpecName: "kube-api-access-wcg58") pod "0b5f547e-c916-40cd-8f40-5fc2b482a4f4" (UID: "0b5f547e-c916-40cd-8f40-5fc2b482a4f4"). InnerVolumeSpecName "kube-api-access-wcg58". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.237578 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.259996 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data" (OuterVolumeSpecName: "config-data") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.260440 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config" (OuterVolumeSpecName: "config") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.290400 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.302433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" (UID: "bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322907 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322948 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322965 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322979 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcg58\" (UniqueName: \"kubernetes.io/projected/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-kube-api-access-wcg58\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322989 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-config\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.322998 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.324569 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.334092 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data" (OuterVolumeSpecName: "config-data") pod "0b5f547e-c916-40cd-8f40-5fc2b482a4f4" (UID: "0b5f547e-c916-40cd-8f40-5fc2b482a4f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.338774 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b5f547e-c916-40cd-8f40-5fc2b482a4f4" (UID: "0b5f547e-c916-40cd-8f40-5fc2b482a4f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.339776 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.344237 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" (UID: "1e4282fb-bc3c-4444-82f9-350d2d3b7b0b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.425795 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426030 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426096 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426150 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b5f547e-c916-40cd-8f40-5fc2b482a4f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.426202 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.686441 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.834936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.835166 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.835359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.835479 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") pod \"dc545ce7-58a7-4757-8eab-8b0a28570a49\" (UID: \"dc545ce7-58a7-4757-8eab-8b0a28570a49\") "
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.839192 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q" (OuterVolumeSpecName: "kube-api-access-z8d9q") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "kube-api-access-z8d9q". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.839608 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts" (OuterVolumeSpecName: "scripts") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.866394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data" (OuterVolumeSpecName: "config-data") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.887600 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc545ce7-58a7-4757-8eab-8b0a28570a49" (UID: "dc545ce7-58a7-4757-8eab-8b0a28570a49"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938422 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938452 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938463 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dc545ce7-58a7-4757-8eab-8b0a28570a49-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.938471 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8d9q\" (UniqueName: \"kubernetes.io/projected/dc545ce7-58a7-4757-8eab-8b0a28570a49-kube-api-access-z8d9q\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.990881 4985 generic.go:334] "Generic (PLEG): container finished" podID="9094cf8a-0196-4d57-9b52-c433eece1088" containerID="7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b" exitCode=0
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.990954 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerDied","Data":"7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b"}
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.994654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-rxz6k" event={"ID":"dc545ce7-58a7-4757-8eab-8b0a28570a49","Type":"ContainerDied","Data":"bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9"}
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.994692 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bcdb112668eaf0b473e2a3decc00678922c53936b28b53ec4075246a540a99e9"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.994762 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-rxz6k"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.997818 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"0b5f547e-c916-40cd-8f40-5fc2b482a4f4","Type":"ContainerDied","Data":"d67f49419ddc18736265dbf8231bcf89cd6ee9def418fabf88a409ff0a470ae3"}
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.997879 4985 scope.go:117] "RemoveContainer" containerID="4f654988f4be09060f89b6257b718b9b3ded1a0d262bce0bb06e2698263f9dff"
Jan 28 18:40:21 crc kubenswrapper[4985]: I0128 18:40:21.998070 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.043422 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2","Type":"ContainerDied","Data":"a80a07711bbf1c6b8d51282102d24275ccb61762be00070c86f2aac16e172c79"}
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.043565 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.049747 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.062544 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.067854 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688b9f5b49-v8wbr"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.068611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerStarted","Data":"6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6"}
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.086407 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087072 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="init"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087090 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="init"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087120 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087128 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087160 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087169 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087191 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087198 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087207 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerName="nova-cell1-conductor-db-sync"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087214 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerName="nova-cell1-conductor-db-sync"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087244 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerName="nova-manage"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087272 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerName="nova-manage"
Jan 28 18:40:22 crc kubenswrapper[4985]: E0128 18:40:22.087289 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087296 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087630 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-log"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087649 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" containerName="dnsmasq-dns"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087670 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" containerName="nova-manage"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087684 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" containerName="nova-metadata-metadata"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087704 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" containerName="nova-cell1-conductor-db-sync"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.087713 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" containerName="nova-scheduler-scheduler"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.088794 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.088818 4985 scope.go:117] "RemoveContainer" containerID="d832775edbe5a8b07f83ebad75fca90b209d8e1af6fb02d629166107777f9d7b"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.092244 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.105081 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.115618 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.118936 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.142216 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.149090 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.149483 4985 scope.go:117] "RemoveContainer" containerID="ecc106116f755f35ae88a484ef050965a1fa42c237890edc93409ff54bd6245c"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.161261 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.178129 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.201124 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.202451 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-hgpsv" podStartSLOduration=3.520269022 podStartE2EDuration="9.202436173s" podCreationTimestamp="2026-01-28 18:40:13 +0000 UTC" firstStartedPulling="2026-01-28 18:40:15.059653975 +0000 UTC m=+1625.886216796" lastFinishedPulling="2026-01-28 18:40:20.741821126 +0000 UTC m=+1631.568383947" observedRunningTime="2026-01-28 18:40:22.11486642 +0000 UTC m=+1632.941429241" watchObservedRunningTime="2026-01-28 18:40:22.202436173 +0000 UTC m=+1633.028998994"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.204749 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.209516 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.209765 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.237782 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250795 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250930 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.250988 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.251020 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fm6rg\" (UniqueName: \"kubernetes.io/projected/bbb020dd-95f1-4d78-9899-9fd0eca60584-kube-api-access-fm6rg\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.251053 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.259346 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.270420 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688b9f5b49-v8wbr"]
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.335625 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.363781 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.363873 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fm6rg\" (UniqueName: \"kubernetes.io/projected/bbb020dd-95f1-4d78-9899-9fd0eca60584-kube-api-access-fm6rg\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.363944 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364130 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364182 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364413 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364445 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364576 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364635 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.364704 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.370350 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.370422 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.404918 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bbb020dd-95f1-4d78-9899-9fd0eca60584-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.407036 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fm6rg\" (UniqueName: \"kubernetes.io/projected/bbb020dd-95f1-4d78-9899-9fd0eca60584-kube-api-access-fm6rg\") pod \"nova-cell1-conductor-0\" (UID: \"bbb020dd-95f1-4d78-9899-9fd0eca60584\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.420821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.427441 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.428644 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"nova-scheduler-0\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " pod="openstack/nova-scheduler-0"
Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.458706 4985 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466018 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466180 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466468 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") pod \"9094cf8a-0196-4d57-9b52-c433eece1088\" (UID: \"9094cf8a-0196-4d57-9b52-c433eece1088\") " Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.466889 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467022 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467049 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467095 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.467122 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.468186 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs" (OuterVolumeSpecName: "logs") pod 
"9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.469225 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.474393 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.476345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.479869 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb" (OuterVolumeSpecName: "kube-api-access-jvpbb") pod "9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "kube-api-access-jvpbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.486595 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.489723 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"nova-metadata-0\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.523241 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data" (OuterVolumeSpecName: "config-data") pod "9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.523647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9094cf8a-0196-4d57-9b52-c433eece1088" (UID: "9094cf8a-0196-4d57-9b52-c433eece1088"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.541120 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575543 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575607 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9094cf8a-0196-4d57-9b52-c433eece1088-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575625 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvpbb\" (UniqueName: \"kubernetes.io/projected/9094cf8a-0196-4d57-9b52-c433eece1088-kube-api-access-jvpbb\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:22 crc kubenswrapper[4985]: I0128 18:40:22.575635 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9094cf8a-0196-4d57-9b52-c433eece1088-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.000028 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.090457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9094cf8a-0196-4d57-9b52-c433eece1088","Type":"ContainerDied","Data":"6ab1f97ac874b54ef01c0179a3153dd1ba3d40d00482df2197af30281a5558ed"} Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.090821 4985 scope.go:117] "RemoveContainer" containerID="7fe261f234dfcdbd654880575e2bca2d56695d9b2729b345e61ed3908aa5d15b" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.090486 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: W0128 18:40:23.097900 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod938ef95c_9a4f_4f1e_b92c_8c16f0043102.slice/crio-8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156 WatchSource:0}: Error finding container 8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156: Status 404 returned error can't find the container with id 8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156 Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.103998 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.106488 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bbb020dd-95f1-4d78-9899-9fd0eca60584","Type":"ContainerStarted","Data":"9cbc86b78469c4374a4f308e99f249b09f17f57a89721e7f8fdda83780cf8762"} Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.190983 4985 scope.go:117] "RemoveContainer" containerID="8f25a54a639d5802a6dfdddf74cdc99effc77725c8b5d2df0e96ef7e74916b41" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.246627 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.260496 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.292594 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b5f547e-c916-40cd-8f40-5fc2b482a4f4" path="/var/lib/kubelet/pods/0b5f547e-c916-40cd-8f40-5fc2b482a4f4/volumes" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.293469 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e4282fb-bc3c-4444-82f9-350d2d3b7b0b" path="/var/lib/kubelet/pods/1e4282fb-bc3c-4444-82f9-350d2d3b7b0b/volumes" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.294410 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" path="/var/lib/kubelet/pods/9094cf8a-0196-4d57-9b52-c433eece1088/volumes" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.299460 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2" path="/var/lib/kubelet/pods/bb3a6db7-1b8e-47a8-8c09-9f13fa2823a2/volumes" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.300452 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: E0128 18:40:23.301641 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.301731 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" Jan 28 18:40:23 crc kubenswrapper[4985]: E0128 18:40:23.301812 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.301875 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.302279 4985 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-log" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.302419 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9094cf8a-0196-4d57-9b52-c433eece1088" containerName="nova-api-api" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.303975 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.304148 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.312358 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.313622 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.411990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.412061 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.412105 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.412132 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.514927 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.515479 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.515640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.516519 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.516563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.520374 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.520591 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.540411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"nova-api-0\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " pod="openstack/nova-api-0" Jan 28 18:40:23 crc kubenswrapper[4985]: I0128 18:40:23.632341 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.129361 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerStarted","Data":"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.129668 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerStarted","Data":"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.129678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerStarted","Data":"beb681875d1b031fab542c0f8d59f502b25e7da8eb5f0f02c317251a2c3309d0"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.144820 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.145203 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerStarted","Data":"047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.145273 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerStarted","Data":"8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156"} Jan 28 18:40:24 crc kubenswrapper[4985]: W0128 18:40:24.159842 4985 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72cdf54b_14dd_4844_bb8c_b68794fba1b9.slice/crio-afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47 WatchSource:0}: Error finding container afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47: Status 404 returned error can't find the container with id afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47 Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.159896 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"bbb020dd-95f1-4d78-9899-9fd0eca60584","Type":"ContainerStarted","Data":"dc8c534822edfe9eb8afcfdb5fd500622fdb8c6873115d966342dd4d21ddfd06"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.160618 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.161917 4985 generic.go:334] "Generic (PLEG): container finished" podID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerID="6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6" exitCode=0 Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.161964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerDied","Data":"6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6"} Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.170740 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.170716631 podStartE2EDuration="2.170716631s" podCreationTimestamp="2026-01-28 18:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:24.152349082 +0000 UTC m=+1634.978911903" watchObservedRunningTime="2026-01-28 18:40:24.170716631 +0000 UTC m=+1634.997279452" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.189882 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.189859762 podStartE2EDuration="2.189859762s" podCreationTimestamp="2026-01-28 18:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:24.17279601 +0000 UTC m=+1634.999358831" watchObservedRunningTime="2026-01-28 18:40:24.189859762 +0000 UTC m=+1635.016422583" Jan 28 18:40:24 crc kubenswrapper[4985]: I0128 18:40:24.210027 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.2100054 podStartE2EDuration="2.2100054s" podCreationTimestamp="2026-01-28 18:40:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:24.202508429 +0000 UTC m=+1635.029071250" watchObservedRunningTime="2026-01-28 18:40:24.2100054 +0000 UTC m=+1635.036568221" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.179701 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerStarted","Data":"5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d"} Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.182096 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerStarted","Data":"6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54"} Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.182128 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerStarted","Data":"afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47"} Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.216119 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.216051804 podStartE2EDuration="2.216051804s" podCreationTimestamp="2026-01-28 18:40:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:25.208676625 +0000 UTC m=+1636.035239446" watchObservedRunningTime="2026-01-28 18:40:25.216051804 +0000 UTC m=+1636.042614625" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.263802 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:25 crc kubenswrapper[4985]: E0128 18:40:25.264104 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.666703 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783209 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783595 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.783899 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") pod \"7decce21-e84c-4501-bf0d-ca01387c51ee\" (UID: \"7decce21-e84c-4501-bf0d-ca01387c51ee\") " Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.789725 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz" (OuterVolumeSpecName: "kube-api-access-wsqcz") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "kube-api-access-wsqcz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.795933 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts" (OuterVolumeSpecName: "scripts") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.820758 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.833007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data" (OuterVolumeSpecName: "config-data") pod "7decce21-e84c-4501-bf0d-ca01387c51ee" (UID: "7decce21-e84c-4501-bf0d-ca01387c51ee"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886773 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886816 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsqcz\" (UniqueName: \"kubernetes.io/projected/7decce21-e84c-4501-bf0d-ca01387c51ee-kube-api-access-wsqcz\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886833 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:25 crc kubenswrapper[4985]: I0128 18:40:25.886845 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7decce21-e84c-4501-bf0d-ca01387c51ee-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:26 crc kubenswrapper[4985]: I0128 18:40:26.194648 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-hgpsv" event={"ID":"7decce21-e84c-4501-bf0d-ca01387c51ee","Type":"ContainerDied","Data":"72a3d23c9a572bc420fc7e3eb89dda8941d63c42b0d6a69ff809fa9dea983c2f"} Jan 28 18:40:26 crc kubenswrapper[4985]: I0128 18:40:26.194705 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a3d23c9a572bc420fc7e3eb89dda8941d63c42b0d6a69ff809fa9dea983c2f" Jan 28 18:40:26 crc kubenswrapper[4985]: I0128 18:40:26.194720 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-hgpsv" Jan 28 18:40:27 crc kubenswrapper[4985]: I0128 18:40:27.459444 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:40:27 crc kubenswrapper[4985]: I0128 18:40:27.542449 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:40:27 crc kubenswrapper[4985]: I0128 18:40:27.542507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:40:28 crc kubenswrapper[4985]: I0128 18:40:28.076191 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.190604 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:29 crc kubenswrapper[4985]: E0128 18:40:29.192197 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerName="aodh-db-sync" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.192227 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerName="aodh-db-sync" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.192842 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" containerName="aodh-db-sync" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.216954 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.223030 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.223099 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.223528 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.260939 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265358 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265582 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.265805 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.368058 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.373743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.374761 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.374818 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.375164 4985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.382982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.398569 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.406953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"aodh-0\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " pod="openstack/aodh-0" Jan 28 18:40:29 crc kubenswrapper[4985]: I0128 18:40:29.542399 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:40:30 crc kubenswrapper[4985]: I0128 18:40:30.295315 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:31 crc kubenswrapper[4985]: I0128 18:40:31.283615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea"} Jan 28 18:40:31 crc kubenswrapper[4985]: I0128 18:40:31.284199 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"0e67457eae33c25cf3a4581aecdd202fe5ea7cb4f78ba1758d22e2ed33abfd6b"} Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.459227 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.467323 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.511061 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.542673 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.542725 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.933789 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934212 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" containerID="cri-o://5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a" 
Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934597 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" containerID="cri-o://b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389" gracePeriod=30
Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934686 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" containerID="cri-o://e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa" gracePeriod=30
Jan 28 18:40:32 crc kubenswrapper[4985]: I0128 18:40:32.934711 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" containerID="cri-o://d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b" gracePeriod=30
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.308824 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d"}
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312073 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389" exitCode=0
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312106 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b" exitCode=2
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312139 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389"}
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.312198 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b"}
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.380806 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.593459 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.593504 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.250:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.633173 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:40:33 crc kubenswrapper[4985]: I0128 18:40:33.633231 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.325922 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa" exitCode=0
Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.326199 4985 generic.go:334] "Generic (PLEG): container finished" podID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerID="5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a" exitCode=0
Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.325998 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa"}
Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.326240 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a"}
Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.722593 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:40:34 crc kubenswrapper[4985]: I0128 18:40:34.722615 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.251:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.760838 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.849884 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") "
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.849972 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") "
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") "
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850352 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") "
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850401 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") "
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850435 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") "
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850501 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") pod \"4bf14558-3072-45a9-bf6c-66d42c26bb42\" (UID: \"4bf14558-3072-45a9-bf6c-66d42c26bb42\") "
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.850516 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.851057 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.852084 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.852112 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/4bf14558-3072-45a9-bf6c-66d42c26bb42-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.867054 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm" (OuterVolumeSpecName: "kube-api-access-gkjmm") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "kube-api-access-gkjmm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.869419 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts" (OuterVolumeSpecName: "scripts") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.922581 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.954358 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gkjmm\" (UniqueName: \"kubernetes.io/projected/4bf14558-3072-45a9-bf6c-66d42c26bb42-kube-api-access-gkjmm\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.954398 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.954409 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 18:40:35 crc kubenswrapper[4985]: I0128 18:40:35.992154 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.018336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data" (OuterVolumeSpecName: "config-data") pod "4bf14558-3072-45a9-bf6c-66d42c26bb42" (UID: "4bf14558-3072-45a9-bf6c-66d42c26bb42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.056656 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.056689 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4bf14558-3072-45a9-bf6c-66d42c26bb42-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.371611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"4bf14558-3072-45a9-bf6c-66d42c26bb42","Type":"ContainerDied","Data":"cda0d3d7eb455e4b9ead99374175951ce213d2d28aa9402eeb2c7090c5991dcb"} Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.371670 4985 scope.go:117] "RemoveContainer" containerID="b9e54c9390ac19ce9b01014af01e84d06209440198802b57b8ed1093cd72b389" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.371848 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.438296 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.464112 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.484749 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485494 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485518 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485530 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485538 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485571 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485579 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: E0128 18:40:36.485610 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485620 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485900 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-notification-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 
18:40:36.485929 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="proxy-httpd" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485950 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="sg-core" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.485981 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" containerName="ceilometer-central-agent" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.489554 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.495820 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.504942 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.512956 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.542307 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674375 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674477 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674519 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674576 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.674835 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.745847 4985 scope.go:117] "RemoveContainer" containerID="d31d4e4526cabd5446579b90e6e8ebe04239de7add61e7534b84bdc949e7941b" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776804 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776919 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776962 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.776991 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.777010 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.777037 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.777066 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.780743 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.780913 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.788607 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.794517 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.802482 4985 scope.go:117] "RemoveContainer" containerID="e830fa21da31aadc107ffb13c5dbc7439288531948ea73e3c3675b37b51f9caa" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.803338 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.803542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.808330 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"ceilometer-0\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.824049 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.845145 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:36 crc kubenswrapper[4985]: I0128 18:40:36.845368 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" containerID="cri-o://926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29" gracePeriod=30 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.009216 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.009687 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/mysqld-exporter-0" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" containerID="cri-o://fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296" gracePeriod=30 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.191188 4985 scope.go:117] "RemoveContainer" containerID="5843e8333b06785c57f83f1e4a0e1c4f7b7edb61800eb50282cf92c2c7396e5a" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.280664 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bf14558-3072-45a9-bf6c-66d42c26bb42" path="/var/lib/kubelet/pods/4bf14558-3072-45a9-bf6c-66d42c26bb42/volumes" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.388760 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3"} Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.391098 4985 generic.go:334] "Generic (PLEG): container finished" podID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerID="926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29" exitCode=2 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.391181 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerDied","Data":"926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29"} Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.398334 4985 generic.go:334] "Generic (PLEG): container finished" podID="558a195a-5deb-441a-9eeb-9e506f49597e" containerID="fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296" exitCode=2 Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.398387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerDied","Data":"fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296"} Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.555290 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.691016 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.699707 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") pod \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\" (UID: \"b4b8dd73-ff4d-44d3-b30f-a994e993392d\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.706935 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6" (OuterVolumeSpecName: "kube-api-access-45mg6") pod "b4b8dd73-ff4d-44d3-b30f-a994e993392d" (UID: "b4b8dd73-ff4d-44d3-b30f-a994e993392d"). InnerVolumeSpecName "kube-api-access-45mg6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.804016 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-45mg6\" (UniqueName: \"kubernetes.io/projected/b4b8dd73-ff4d-44d3-b30f-a994e993392d-kube-api-access-45mg6\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.815859 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.905748 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") pod \"558a195a-5deb-441a-9eeb-9e506f49597e\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.905825 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") pod \"558a195a-5deb-441a-9eeb-9e506f49597e\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.905882 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") pod \"558a195a-5deb-441a-9eeb-9e506f49597e\" (UID: \"558a195a-5deb-441a-9eeb-9e506f49597e\") " Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.922454 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf" (OuterVolumeSpecName: "kube-api-access-q8sjf") pod "558a195a-5deb-441a-9eeb-9e506f49597e" (UID: "558a195a-5deb-441a-9eeb-9e506f49597e"). InnerVolumeSpecName "kube-api-access-q8sjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.933907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "558a195a-5deb-441a-9eeb-9e506f49597e" (UID: "558a195a-5deb-441a-9eeb-9e506f49597e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:37 crc kubenswrapper[4985]: I0128 18:40:37.980510 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data" (OuterVolumeSpecName: "config-data") pod "558a195a-5deb-441a-9eeb-9e506f49597e" (UID: "558a195a-5deb-441a-9eeb-9e506f49597e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.009360 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.009395 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8sjf\" (UniqueName: \"kubernetes.io/projected/558a195a-5deb-441a-9eeb-9e506f49597e-kube-api-access-q8sjf\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.009406 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/558a195a-5deb-441a-9eeb-9e506f49597e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.417918 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"b4b8dd73-ff4d-44d3-b30f-a994e993392d","Type":"ContainerDied","Data":"ec024b4a882b8b962648e5e1cddea01209414bd2598d2c9c73886bd738d4ea3d"} Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.417979 4985 scope.go:117] "RemoveContainer" containerID="926ee0d9744c84d616cdd1efef14930926916bccab52a9fc5bcb156c80c24d29" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.418180 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.425841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"558a195a-5deb-441a-9eeb-9e506f49597e","Type":"ContainerDied","Data":"85458b6f5d810a7b499082f7190c9ac8b481800a9c019fc526f3a7b1b018b583"} Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.425989 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.431992 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"ce00adc004811ac9876895749ff5243ac88f3112b42fc43a6710153984d18f01"} Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.492856 4985 scope.go:117] "RemoveContainer" containerID="fb245cebe475dc743941a7a591f70b9acf915655a7047e5c0f3798d225e1d296" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.518540 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.562169 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.590901 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: E0128 18:40:38.591554 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591580 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" Jan 28 18:40:38 crc kubenswrapper[4985]: E0128 18:40:38.591633 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591640 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591858 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" containerName="kube-state-metrics" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.591881 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" containerName="mysqld-exporter" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.592964 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.596098 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.596475 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.617545 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.652761 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.679939 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.692163 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.694188 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.698858 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-mysqld-exporter-svc" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.698914 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"mysqld-exporter-config-data" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.714981 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.749677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbswb\" (UniqueName: \"kubernetes.io/projected/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-api-access-gbswb\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.749744 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.750014 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.750094 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.852744 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r82rc\" (UniqueName: \"kubernetes.io/projected/6b1f6dd4-6d66-4f40-879f-5f0af3845842-kube-api-access-r82rc\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.852828 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbswb\" (UniqueName: \"kubernetes.io/projected/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-api-access-gbswb\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.852973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853287 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853379 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-config-data\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853520 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.853588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.858596 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.871138 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.871777 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.875995 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbswb\" (UniqueName: \"kubernetes.io/projected/1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e-kube-api-access-gbswb\") pod \"kube-state-metrics-0\" (UID: \"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e\") " pod="openstack/kube-state-metrics-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956145 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956206 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-config-data\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956283 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.956362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r82rc\" (UniqueName: \"kubernetes.io/projected/6b1f6dd4-6d66-4f40-879f-5f0af3845842-kube-api-access-r82rc\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.961338 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mysqld-exporter-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-mysqld-exporter-tls-certs\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.965046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-config-data\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.974703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b1f6dd4-6d66-4f40-879f-5f0af3845842-combined-ca-bundle\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.979345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r82rc\" (UniqueName: \"kubernetes.io/projected/6b1f6dd4-6d66-4f40-879f-5f0af3845842-kube-api-access-r82rc\") pod \"mysqld-exporter-0\" (UID: \"6b1f6dd4-6d66-4f40-879f-5f0af3845842\") " pod="openstack/mysqld-exporter-0" Jan 28 18:40:38 crc kubenswrapper[4985]: I0128 18:40:38.996751 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.014511 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/mysqld-exporter-0" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.156833 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.264356 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:39 crc kubenswrapper[4985]: E0128 18:40:39.264599 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.283725 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="558a195a-5deb-441a-9eeb-9e506f49597e" path="/var/lib/kubelet/pods/558a195a-5deb-441a-9eeb-9e506f49597e/volumes" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.327315 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4b8dd73-ff4d-44d3-b30f-a994e993392d" path="/var/lib/kubelet/pods/b4b8dd73-ff4d-44d3-b30f-a994e993392d/volumes" Jan 28 18:40:39 crc kubenswrapper[4985]: I0128 18:40:39.452930 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2"} Jan 28 18:40:40 crc kubenswrapper[4985]: W0128 18:40:40.092306 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1e6eb1bd_1379_4be2_bcb0_6d7a37e93e9e.slice/crio-af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f WatchSource:0}: Error finding container af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f: Status 404 returned error can't find the container with id af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f Jan 28 18:40:40 crc kubenswrapper[4985]: W0128 18:40:40.096434 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b1f6dd4_6d66_4f40_879f_5f0af3845842.slice/crio-e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08 WatchSource:0}: Error finding container e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08: Status 404 returned error can't find the container with id e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08 Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.113524 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/mysqld-exporter-0"] Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.126881 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.473433 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerStarted","Data":"af5ba1e93278410187fd69c8fa837aeaecc5cffabce8a2786e1f6dcdecdc625f"} Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.481673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" 
event={"ID":"6b1f6dd4-6d66-4f40-879f-5f0af3845842","Type":"ContainerStarted","Data":"e7a193405b4304741a718f4d37c1ff7fe232fa8f41840fee8539d24d7a9c9e08"} Jan 28 18:40:40 crc kubenswrapper[4985]: I0128 18:40:40.497592 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd"} Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510579 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerStarted","Data":"116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55"} Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510714 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" containerID="cri-o://cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510773 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" containerID="cri-o://116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510812 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" containerID="cri-o://5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.510837 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" containerID="cri-o://45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3" gracePeriod=30 Jan 28 18:40:41 crc kubenswrapper[4985]: I0128 18:40:41.533300 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.233797952 podStartE2EDuration="12.533281057s" podCreationTimestamp="2026-01-28 18:40:29 +0000 UTC" firstStartedPulling="2026-01-28 18:40:30.290438566 +0000 UTC m=+1641.117001387" lastFinishedPulling="2026-01-28 18:40:39.589921671 +0000 UTC m=+1650.416484492" observedRunningTime="2026-01-28 18:40:41.532348761 +0000 UTC m=+1652.358911592" watchObservedRunningTime="2026-01-28 18:40:41.533281057 +0000 UTC m=+1652.359843878" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.534859 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d" exitCode=0 Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.535147 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea" exitCode=0 Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.534957 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d"} Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 
18:40:42.535280 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea"} Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.538696 4985 generic.go:334] "Generic (PLEG): container finished" podID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerID="8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e" exitCode=137 Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.538741 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerDied","Data":"8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e"} Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.553850 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.558352 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.561561 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.638997 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.780992 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") pod \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.781341 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") pod \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.781680 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") pod \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\" (UID: \"adbc3193-99ed-4a75-848b-6b98dfef1d3a\") " Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.786896 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv" (OuterVolumeSpecName: "kube-api-access-vkmsv") pod "adbc3193-99ed-4a75-848b-6b98dfef1d3a" (UID: "adbc3193-99ed-4a75-848b-6b98dfef1d3a"). InnerVolumeSpecName "kube-api-access-vkmsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.815488 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "adbc3193-99ed-4a75-848b-6b98dfef1d3a" (UID: "adbc3193-99ed-4a75-848b-6b98dfef1d3a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.816917 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data" (OuterVolumeSpecName: "config-data") pod "adbc3193-99ed-4a75-848b-6b98dfef1d3a" (UID: "adbc3193-99ed-4a75-848b-6b98dfef1d3a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.885055 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkmsv\" (UniqueName: \"kubernetes.io/projected/adbc3193-99ed-4a75-848b-6b98dfef1d3a-kube-api-access-vkmsv\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.885086 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:42 crc kubenswrapper[4985]: I0128 18:40:42.885095 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/adbc3193-99ed-4a75-848b-6b98dfef1d3a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.552538 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3" exitCode=0 Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.552613 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3"} Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.555825 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.556458 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"adbc3193-99ed-4a75-848b-6b98dfef1d3a","Type":"ContainerDied","Data":"d8cf9fb9c6cec17cb1a2721de6a0e35c45b968fbf964f4ce2fc3f3f714ea3e1d"} Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.556495 4985 scope.go:117] "RemoveContainer" containerID="8e55d982fad1ab9461d4987775a77b35c6b3f7d058a5f2ff32d12ef2930dd72e" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.573676 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.583875 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.599565 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.616193 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: E0128 18:40:43.616803 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.616819 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.617088 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.618018 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.623806 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.624074 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.624346 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.640884 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.642150 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.656897 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.672380 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.673947 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815036 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815082 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815178 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815207 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.815306 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blw9r\" (UniqueName: \"kubernetes.io/projected/4e0bd087-7446-45b4-858b-7b514713d4fe-kube-api-access-blw9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916723 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-blw9r\" (UniqueName: \"kubernetes.io/projected/4e0bd087-7446-45b4-858b-7b514713d4fe-kube-api-access-blw9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916906 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916925 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916954 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.916984 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.921387 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.921931 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.923880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.924628 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4e0bd087-7446-45b4-858b-7b514713d4fe-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.936594 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blw9r\" (UniqueName: 
\"kubernetes.io/projected/4e0bd087-7446-45b4-858b-7b514713d4fe-kube-api-access-blw9r\") pod \"nova-cell1-novncproxy-0\" (UID: \"4e0bd087-7446-45b4-858b-7b514713d4fe\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:43 crc kubenswrapper[4985]: I0128 18:40:43.937189 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.568718 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704"} Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.569089 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.573782 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.758800 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.798223 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.800132 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.846903 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.944695 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945084 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945170 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945209 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945323 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:44 crc kubenswrapper[4985]: I0128 18:40:44.945389 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.047402 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048563 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048669 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048822 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048898 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.049030 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.048485 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050029 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod 
\"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050075 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050174 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.050750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.067633 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"dnsmasq-dns-f84f9ccf-mp4hr\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.155951 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.287607 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="adbc3193-99ed-4a75-848b-6b98dfef1d3a" path="/var/lib/kubelet/pods/adbc3193-99ed-4a75-848b-6b98dfef1d3a/volumes" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.581232 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e0bd087-7446-45b4-858b-7b514713d4fe","Type":"ContainerStarted","Data":"0ea31fa32ec22c0401b08dda3f024f7fef07811f5c62450a61dc039159d908ff"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.581480 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"4e0bd087-7446-45b4-858b-7b514713d4fe","Type":"ContainerStarted","Data":"62f5d763e031e1fd03aa24e0cb0496eb67ec3549061d27a4e24005f40fdf07c0"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.587191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerStarted","Data":"dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.587417 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.591997 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/mysqld-exporter-0" event={"ID":"6b1f6dd4-6d66-4f40-879f-5f0af3845842","Type":"ContainerStarted","Data":"38b3266549f39b090b2b6709a347b2040c589c8067c8e7ca7a4cc2de8aabc0c8"} Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.616386 4985 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" podUID="12d4e4cf-9153-4a32-9155-f9d13a248a26" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.630636 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.6306146139999997 podStartE2EDuration="2.630614614s" podCreationTimestamp="2026-01-28 18:40:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:45.601579854 +0000 UTC m=+1656.428142675" watchObservedRunningTime="2026-01-28 18:40:45.630614614 +0000 UTC m=+1656.457177445" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.632737 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=5.413223742 podStartE2EDuration="7.632726993s" podCreationTimestamp="2026-01-28 18:40:38 +0000 UTC" firstStartedPulling="2026-01-28 18:40:40.094934899 +0000 UTC m=+1650.921497720" lastFinishedPulling="2026-01-28 18:40:42.31443815 +0000 UTC m=+1653.141000971" observedRunningTime="2026-01-28 18:40:45.625764097 +0000 UTC m=+1656.452326938" watchObservedRunningTime="2026-01-28 18:40:45.632726993 +0000 UTC m=+1656.459289814" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.645970 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/mysqld-exporter-0" podStartSLOduration=3.920427538 podStartE2EDuration="7.645947777s" podCreationTimestamp="2026-01-28 18:40:38 +0000 UTC" firstStartedPulling="2026-01-28 18:40:40.101128454 +0000 UTC m=+1650.927691265" lastFinishedPulling="2026-01-28 18:40:43.826648683 +0000 UTC m=+1654.653211504" observedRunningTime="2026-01-28 18:40:45.643269141 +0000 UTC m=+1656.469831972" watchObservedRunningTime="2026-01-28 18:40:45.645947777 +0000 UTC m=+1656.472510608" Jan 28 18:40:45 crc kubenswrapper[4985]: I0128 18:40:45.737980 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:40:46 crc kubenswrapper[4985]: I0128 18:40:46.605855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerStarted","Data":"8a81f5a6bc9aeb4779fe5ba3167c9da81f9d6b2cee2d0a3316b0a2d07b8f7a9e"} Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.489202 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.630338 4985 generic.go:334] "Generic (PLEG): container finished" podID="f33e23a8-5c59-41b1-9afe-00977f966724" containerID="fd29c92499411247c46e32f0f3619427bf7f15dbc9ff2205fbac7905d817aa90" exitCode=0 Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.630614 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" containerID="cri-o://6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54" gracePeriod=30 Jan 28 18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.632091 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerDied","Data":"fd29c92499411247c46e32f0f3619427bf7f15dbc9ff2205fbac7905d817aa90"} Jan 28 
18:40:47 crc kubenswrapper[4985]: I0128 18:40:47.632697 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" containerID="cri-o://5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.647219 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerStarted","Data":"8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419"} Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.647798 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651310 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerStarted","Data":"d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7"} Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651441 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" containerID="cri-o://62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651469 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651476 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" containerID="cri-o://c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651480 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" containerID="cri-o://26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.651517 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" containerID="cri-o://d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7" gracePeriod=30 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.655434 4985 generic.go:334] "Generic (PLEG): container finished" podID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerID="6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54" exitCode=143 Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.655487 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerDied","Data":"6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54"} Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.673095 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" podStartSLOduration=4.67307818 podStartE2EDuration="4.67307818s" podCreationTimestamp="2026-01-28 18:40:44 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:48.66669361 +0000 UTC m=+1659.493256441" watchObservedRunningTime="2026-01-28 18:40:48.67307818 +0000 UTC m=+1659.499641001" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.701102 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.815359995 podStartE2EDuration="12.701083211s" podCreationTimestamp="2026-01-28 18:40:36 +0000 UTC" firstStartedPulling="2026-01-28 18:40:37.698909684 +0000 UTC m=+1648.525472505" lastFinishedPulling="2026-01-28 18:40:47.5846329 +0000 UTC m=+1658.411195721" observedRunningTime="2026-01-28 18:40:48.693459345 +0000 UTC m=+1659.520022177" watchObservedRunningTime="2026-01-28 18:40:48.701083211 +0000 UTC m=+1659.527646032" Jan 28 18:40:48 crc kubenswrapper[4985]: I0128 18:40:48.937854 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676560 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7" exitCode=0 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676874 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704" exitCode=2 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676620 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7"} Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676925 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704"} Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676939 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd"} Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676886 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd" exitCode=0 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.676960 4985 generic.go:334] "Generic (PLEG): container finished" podID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerID="62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2" exitCode=0 Jan 28 18:40:49 crc kubenswrapper[4985]: I0128 18:40:49.677324 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2"} Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.264162 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:40:50 crc kubenswrapper[4985]: E0128 18:40:50.264567 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.658170 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.693782 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8480417c-9ea7-4d07-bcbd-7734e301a0c6","Type":"ContainerDied","Data":"ce00adc004811ac9876895749ff5243ac88f3112b42fc43a6710153984d18f01"} Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.693841 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.693865 4985 scope.go:117] "RemoveContainer" containerID="d31e92aa6b1d7376b4e96782143ab6de149e34e427162f5a9786c7802bc818a7" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.726996 4985 scope.go:117] "RemoveContainer" containerID="26219cb687355c4dac3bfd3a6d68d0e8525ff60342389f25724df8675c0e7704" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.750225 4985 scope.go:117] "RemoveContainer" containerID="c96c826eaeb96bb76e151ca4f0d78c7aedd46ac1aa31c55f5960d944997cc2fd" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.777048 4985 scope.go:117] "RemoveContainer" containerID="62c497ce8a32d9934318c17ed91d43a5f2b55f59dcf450233639cd2285d0f2a2" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.802928 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803111 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803316 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803432 4985 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.803476 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") pod \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\" (UID: \"8480417c-9ea7-4d07-bcbd-7734e301a0c6\") " Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.805382 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.806460 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.810645 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts" (OuterVolumeSpecName: "scripts") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.810985 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5" (OuterVolumeSpecName: "kube-api-access-gxpb5") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "kube-api-access-gxpb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.847704 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906577 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxpb5\" (UniqueName: \"kubernetes.io/projected/8480417c-9ea7-4d07-bcbd-7734e301a0c6-kube-api-access-gxpb5\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906619 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906634 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906647 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.906658 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8480417c-9ea7-4d07-bcbd-7734e301a0c6-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.907281 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:50 crc kubenswrapper[4985]: I0128 18:40:50.944875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data" (OuterVolumeSpecName: "config-data") pod "8480417c-9ea7-4d07-bcbd-7734e301a0c6" (UID: "8480417c-9ea7-4d07-bcbd-7734e301a0c6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.008506 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.008541 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8480417c-9ea7-4d07-bcbd-7734e301a0c6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.032794 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.050567 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066069 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066659 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066683 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066703 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066711 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066724 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066731 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" Jan 28 18:40:51 crc kubenswrapper[4985]: E0128 18:40:51.066758 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.066765 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067057 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-notification-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067078 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="proxy-httpd" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067101 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="sg-core" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.067124 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" containerName="ceilometer-central-agent" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.069730 4985 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.078852 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.080752 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.080932 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.103785 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213704 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213783 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213838 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.213892 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214015 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214149 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214306 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.214345 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vnll\" (UniqueName: 
\"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.283871 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8480417c-9ea7-4d07-bcbd-7734e301a0c6" path="/var/lib/kubelet/pods/8480417c-9ea7-4d07-bcbd-7734e301a0c6/volumes" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.284792 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.287323 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.292156 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316529 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316671 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316712 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316820 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316871 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316907 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.316986 4985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.317609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.317875 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.322347 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.323077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.323948 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.325880 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.325895 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.361708 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"ceilometer-0\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.419305 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.419410 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.419637 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.460119 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.521938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.522675 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.522807 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.523350 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.523411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.546345 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"community-operators-sn5lq\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.614747 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.712870 4985 generic.go:334] "Generic (PLEG): container finished" podID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerID="5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d" exitCode=0 Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.712922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerDied","Data":"5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d"} Jan 28 18:40:51 crc kubenswrapper[4985]: I0128 18:40:51.984204 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.185574 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251557 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251631 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251887 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.251958 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") pod \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\" (UID: \"72cdf54b-14dd-4844-bb8c-b68794fba1b9\") " Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.252450 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs" (OuterVolumeSpecName: "logs") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.252921 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/72cdf54b-14dd-4844-bb8c-b68794fba1b9-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.257647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7" (OuterVolumeSpecName: "kube-api-access-clxv7") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "kube-api-access-clxv7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.296926 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.304868 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data" (OuterVolumeSpecName: "config-data") pod "72cdf54b-14dd-4844-bb8c-b68794fba1b9" (UID: "72cdf54b-14dd-4844-bb8c-b68794fba1b9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:40:52 crc kubenswrapper[4985]: W0128 18:40:52.318910 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfe3dd10e_5081_4256_9c08_e2be3557bf65.slice/crio-1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd WatchSource:0}: Error finding container 1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd: Status 404 returned error can't find the container with id 1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.351330 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.356700 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.356734 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/72cdf54b-14dd-4844-bb8c-b68794fba1b9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.356744 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clxv7\" (UniqueName: \"kubernetes.io/projected/72cdf54b-14dd-4844-bb8c-b68794fba1b9-kube-api-access-clxv7\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.729671 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.729653 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"72cdf54b-14dd-4844-bb8c-b68794fba1b9","Type":"ContainerDied","Data":"afeb7e343ebc16ce5060f2783d896f767c20813419a24762ce1683493a801f47"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.730163 4985 scope.go:117] "RemoveContainer" containerID="5ddbcefbcd9d03f983d9329ae2dee80e9b1046c773fa3fc54838926cf067667d" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.733447 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"99f6a59231cb74972d7065e16a91981feb750820d3a47ac21d46c1a8419a7fb5"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.735698 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" exitCode=0 Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.735732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.735751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerStarted","Data":"1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd"} Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.760196 4985 scope.go:117] "RemoveContainer" containerID="6400694cb09a2eb35a99c8f2620bc42af5a434bb4e4c9f3a4165d20445332e54" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.815424 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.836231 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.850450 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:52 crc kubenswrapper[4985]: E0128 18:40:52.851089 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851113 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" Jan 28 18:40:52 crc kubenswrapper[4985]: E0128 18:40:52.851168 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851177 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851501 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-log" Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.851534 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" containerName="nova-api-api" Jan 28 18:40:52 crc 
kubenswrapper[4985]: I0128 18:40:52.853210 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.862475 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.864509 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.865599 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.864581 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972056 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972300 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972333 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972378 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:52 crc kubenswrapper[4985]: I0128 18:40:52.972427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074469 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074568 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074601 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074650 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074707 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.074868 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.075421 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.080793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.084703 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.084926 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.085590 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.094561 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0"
\"nova-api-0\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " pod="openstack/nova-api-0" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.286440 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72cdf54b-14dd-4844-bb8c-b68794fba1b9" path="/var/lib/kubelet/pods/72cdf54b-14dd-4844-bb8c-b68794fba1b9/volumes" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.307483 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.750542 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0"} Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.773039 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.937676 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:53 crc kubenswrapper[4985]: I0128 18:40:53.958727 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.773383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerStarted","Data":"1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc"} Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.773785 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerStarted","Data":"c90565f788cfb36cdadf74a3373459a040e9f918b36e0c76ca75c9290bca74e9"} Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.793602 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.980307 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"] Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.982302 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.984832 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.984885 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Jan 28 18:40:54 crc kubenswrapper[4985]: I0128 18:40:54.993731 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"] Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.030519 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.030646 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.030850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.031076 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133433 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133544 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133615 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.133799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.138960 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.139988 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.140896 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.154200 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"nova-cell1-cell-mapping-559zx\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.158416 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.286968 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"] Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.287188 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" containerID="cri-o://4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9" gracePeriod=10 Jan 28 18:40:55 crc kubenswrapper[4985]: I0128 18:40:55.347877 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.796525 4985 generic.go:334] "Generic (PLEG): container finished" podID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerID="4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9" exitCode=0 Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.796626 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerDied","Data":"4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9"} Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.799123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerStarted","Data":"091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade"} Jan 28 18:40:56 crc kubenswrapper[4985]: I0128 18:40:56.829908 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=4.829883755 podStartE2EDuration="4.829883755s" podCreationTimestamp="2026-01-28 18:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:40:56.824449521 +0000 UTC m=+1667.651012352" watchObservedRunningTime="2026-01-28 18:40:56.829883755 +0000 UTC m=+1667.656446586" Jan 28 18:40:57 crc kubenswrapper[4985]: I0128 18:40:57.815401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" event={"ID":"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0","Type":"ContainerDied","Data":"b12e09f6a40d1423b050a43aba39f7da27aac982d0fc418cb95ef0f8e230e6e1"} Jan 28 18:40:57 crc kubenswrapper[4985]: I0128 18:40:57.815989 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b12e09f6a40d1423b050a43aba39f7da27aac982d0fc418cb95ef0f8e230e6e1" Jan 28 18:40:57 crc kubenswrapper[4985]: I0128 18:40:57.916596 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010198 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010281 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010326 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010586 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010666 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.010748 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") pod \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\" (UID: \"a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0\") " Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.016087 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m" (OuterVolumeSpecName: "kube-api-access-d694m") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "kube-api-access-d694m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.076309 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.087097 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.088511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.103143 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config" (OuterVolumeSpecName: "config") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.110478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" (UID: "a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114434 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114606 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114677 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114740 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114839 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.114963 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d694m\" (UniqueName: \"kubernetes.io/projected/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0-kube-api-access-d694m\") on node \"crc\" DevicePath \"\"" Jan 28 18:40:58 crc kubenswrapper[4985]: I0128 18:40:58.825625 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-568d7fd7cf-hjzhw" Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.015910 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"] Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.022714 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.035512 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-568d7fd7cf-hjzhw"] Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.277458 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" path="/var/lib/kubelet/pods/a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0/volumes" Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.332738 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"] Jan 28 18:40:59 crc kubenswrapper[4985]: W0128 18:40:59.334595 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaabefa44_123b_48ce_a38b_8c5f6ed32b73.slice/crio-870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067 WatchSource:0}: Error finding container 870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067: Status 404 returned error can't find the container with id 870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067 Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.841570 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerStarted","Data":"870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067"} Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.846002 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0"} Jan 28 18:40:59 crc kubenswrapper[4985]: I0128 18:40:59.849611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerStarted","Data":"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136"} Jan 28 18:41:00 crc kubenswrapper[4985]: I0128 18:41:00.861121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerStarted","Data":"db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa"} Jan 28 18:41:00 crc kubenswrapper[4985]: I0128 18:41:00.885589 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-559zx" podStartSLOduration=6.885565426 podStartE2EDuration="6.885565426s" podCreationTimestamp="2026-01-28 18:40:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:00.881951084 +0000 UTC m=+1671.708513925" watchObservedRunningTime="2026-01-28 18:41:00.885565426 +0000 UTC m=+1671.712128287" Jan 28 18:41:02 crc kubenswrapper[4985]: I0128 18:41:02.893083 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba"} Jan 28 18:41:03 crc kubenswrapper[4985]: I0128 18:41:03.308518 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:41:03 crc kubenswrapper[4985]: I0128 18:41:03.308584 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.264958 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:04 crc kubenswrapper[4985]: E0128 18:41:04.265709 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.322471 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.322471 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.4:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.924179 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" exitCode=0 Jan 28 18:41:04 crc kubenswrapper[4985]: I0128 18:41:04.924238 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136"} Jan 28 18:41:05 crc kubenswrapper[4985]: I0128 18:41:05.944003 4985 generic.go:334] "Generic (PLEG): container finished" podID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" containerID="db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa" exitCode=0 Jan 28 18:41:05 crc kubenswrapper[4985]: I0128 18:41:05.944358 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerDied","Data":"db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa"} Jan 28 18:41:06 crc kubenswrapper[4985]: I0128 18:41:06.961925 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerStarted","Data":"fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236"} Jan 28 18:41:06 crc kubenswrapper[4985]: I0128 18:41:06.963455 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:41:06 crc kubenswrapper[4985]: I0128 18:41:06.967116 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerStarted","Data":"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849"} Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.016655 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.167945581 podStartE2EDuration="16.016625531s" podCreationTimestamp="2026-01-28 18:40:51 +0000 UTC" firstStartedPulling="2026-01-28 18:40:52.003461174 +0000 UTC m=+1662.830023995" lastFinishedPulling="2026-01-28 18:41:05.852141124 +0000 UTC m=+1676.678703945" observedRunningTime="2026-01-28 18:41:06.993861778 +0000 UTC m=+1677.820424609" watchObservedRunningTime="2026-01-28 18:41:07.016625531 +0000 UTC m=+1677.843188392" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.023093 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-sn5lq" podStartSLOduration=2.813192667 podStartE2EDuration="16.023069103s" podCreationTimestamp="2026-01-28 18:40:51 +0000 UTC" firstStartedPulling="2026-01-28 18:40:52.760435855 +0000 UTC m=+1663.586998676" lastFinishedPulling="2026-01-28 18:41:05.970312251 +0000 UTC m=+1676.796875112" observedRunningTime="2026-01-28 18:41:07.01765409 +0000 UTC m=+1677.844216921" watchObservedRunningTime="2026-01-28 18:41:07.023069103 +0000 UTC m=+1677.849631924" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.458601 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584101 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584201 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584603 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.584674 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") pod \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\" (UID: \"aabefa44-123b-48ce-a38b-8c5f6ed32b73\") " Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.590318 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts" (OuterVolumeSpecName: "scripts") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.590509 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7" (OuterVolumeSpecName: "kube-api-access-j8nl7") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "kube-api-access-j8nl7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.619875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.647081 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data" (OuterVolumeSpecName: "config-data") pod "aabefa44-123b-48ce-a38b-8c5f6ed32b73" (UID: "aabefa44-123b-48ce-a38b-8c5f6ed32b73"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687560 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687596 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687609 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8nl7\" (UniqueName: \"kubernetes.io/projected/aabefa44-123b-48ce-a38b-8c5f6ed32b73-kube-api-access-j8nl7\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.687623 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aabefa44-123b-48ce-a38b-8c5f6ed32b73-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.981742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-559zx" event={"ID":"aabefa44-123b-48ce-a38b-8c5f6ed32b73","Type":"ContainerDied","Data":"870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067"} Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.981811 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="870660ec8bc3c0314dc1037dd620996db52b2a1745a86024589f75d20c716067" Jan 28 18:41:07 crc kubenswrapper[4985]: I0128 18:41:07.981772 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-559zx" Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.163363 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.163763 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" containerID="cri-o://1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.163902 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" containerID="cri-o://091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.207388 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.208179 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" containerID="cri-o://047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.221978 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.222322 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" containerID="cri-o://dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.222465 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" containerID="cri-o://a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" gracePeriod=30 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.996858 4985 generic.go:334] "Generic (PLEG): container finished" podID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerID="1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc" exitCode=143 Jan 28 18:41:08 crc kubenswrapper[4985]: I0128 18:41:08.997170 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerDied","Data":"1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc"} Jan 28 18:41:09 crc kubenswrapper[4985]: I0128 18:41:09.002402 4985 generic.go:334] "Generic (PLEG): container finished" podID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" exitCode=143 Jan 28 18:41:09 crc kubenswrapper[4985]: I0128 18:41:09.003679 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerDied","Data":"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937"} Jan 28 18:41:11 crc kubenswrapper[4985]: I0128 18:41:11.619741 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:11 crc kubenswrapper[4985]: I0128 18:41:11.620098 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.034769 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.086201 4985 generic.go:334] "Generic (PLEG): container finished" podID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerID="116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55" exitCode=137 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.086241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.092267 4985 generic.go:334] "Generic (PLEG): container finished" podID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerID="091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade" exitCode=0 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.092376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerDied","Data":"091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.097907 4985 generic.go:334] "Generic (PLEG): container finished" podID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" exitCode=0 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.097968 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerDied","Data":"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.097993 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"9aa1f962-f78d-41dc-a567-7c749f53ce57","Type":"ContainerDied","Data":"beb681875d1b031fab542c0f8d59f502b25e7da8eb5f0f02c317251a2c3309d0"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.098002 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.098011 4985 scope.go:117] "RemoveContainer" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.109415 4985 generic.go:334] "Generic (PLEG): container finished" podID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" exitCode=0 Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.109450 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerDied","Data":"047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2"} Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.109979 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.150583 4985 scope.go:117] "RemoveContainer" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.195967 4985 scope.go:117] "RemoveContainer" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.196844 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9\": container with ID starting with a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9 not found: ID does not exist" containerID="a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.196920 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9"} err="failed to get container status \"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9\": rpc error: code = NotFound desc = could not find container \"a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9\": container with ID starting with a20ab3ceb34c1fe528e62410a5713fe09c476d04429deec98fa1f5e0300943e9 not found: ID does not exist" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.196947 4985 scope.go:117] "RemoveContainer" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.197458 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937\": container with ID starting with dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937 not found: ID does not exist" containerID="dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.197891 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937"} err="failed to get container status \"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937\": rpc error: code = NotFound desc = could not find container \"dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937\": container with ID starting with dd8443c743ef7f52c5f1891fe3338f54004b45f1e7ee946d174e378be8928937 not found: ID does not exist" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211110 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211172 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211474 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211543 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211579 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211663 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211854 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") pod \"9aa1f962-f78d-41dc-a567-7c749f53ce57\" (UID: \"9aa1f962-f78d-41dc-a567-7c749f53ce57\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211889 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.211950 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") pod \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\" (UID: \"1901b8df-d418-45ea-8d73-c6ffbf3a0da5\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.219331 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598" (OuterVolumeSpecName: "kube-api-access-2c598") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "kube-api-access-2c598". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.226190 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz" (OuterVolumeSpecName: "kube-api-access-2r4dz") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "kube-api-access-2r4dz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.227008 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts" (OuterVolumeSpecName: "scripts") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.231838 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs" (OuterVolumeSpecName: "logs") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.271772 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data" (OuterVolumeSpecName: "config-data") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.295527 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.297054 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9aa1f962-f78d-41dc-a567-7c749f53ce57" (UID: "9aa1f962-f78d-41dc-a567-7c749f53ce57"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315710 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315746 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2c598\" (UniqueName: \"kubernetes.io/projected/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-kube-api-access-2c598\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315756 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315766 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9aa1f962-f78d-41dc-a567-7c749f53ce57-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315773 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9aa1f962-f78d-41dc-a567-7c749f53ce57-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315782 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r4dz\" (UniqueName: \"kubernetes.io/projected/9aa1f962-f78d-41dc-a567-7c749f53ce57-kube-api-access-2r4dz\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.315790 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.364435 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.405586 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.408331 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data" (OuterVolumeSpecName: "config-data") pod "1901b8df-d418-45ea-8d73-c6ffbf3a0da5" (UID: "1901b8df-d418-45ea-8d73-c6ffbf3a0da5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.417971 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.418005 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1901b8df-d418-45ea-8d73-c6ffbf3a0da5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.475707 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.476528 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.477881 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.477917 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519171 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519444 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519529 4985 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519690 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.519765 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") pod \"7258e3aa-2eb9-4bc7-a143-76946c12b889\" (UID: \"7258e3aa-2eb9-4bc7-a143-76946c12b889\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.520075 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs" (OuterVolumeSpecName: "logs") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.520521 4985 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7258e3aa-2eb9-4bc7-a143-76946c12b889-logs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.536605 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n" (OuterVolumeSpecName: "kube-api-access-5jb8n") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "kube-api-access-5jb8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.579676 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data" (OuterVolumeSpecName: "config-data") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.585732 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.623860 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jb8n\" (UniqueName: \"kubernetes.io/projected/7258e3aa-2eb9-4bc7-a143-76946c12b889-kube-api-access-5jb8n\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.623896 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.623910 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.638453 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.643343 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "7258e3aa-2eb9-4bc7-a143-76946c12b889" (UID: "7258e3aa-2eb9-4bc7-a143-76946c12b889"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.673442 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" probeResult="failure" output=< Jan 28 18:41:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:41:12 crc kubenswrapper[4985]: > Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.728537 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.728607 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/7258e3aa-2eb9-4bc7-a143-76946c12b889-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.746363 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.769458 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.805474 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.830007 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") pod \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.830365 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") pod \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.830406 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") pod \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\" (UID: \"938ef95c-9a4f-4f1e-b92c-8c16f0043102\") " Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.832338 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833349 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833383 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833407 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833415 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833432 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833610 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833632 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833647 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833663 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="init" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833671 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="init" Jan 28 18:41:12 
crc kubenswrapper[4985]: E0128 18:41:12.833694 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833703 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833718 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833726 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833744 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" containerName="nova-manage" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833753 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" containerName="nova-manage" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833769 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833777 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833793 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833800 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833819 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833827 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" Jan 28 18:41:12 crc kubenswrapper[4985]: E0128 18:41:12.833865 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.833874 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834202 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834225 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" containerName="nova-metadata-metadata" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834236 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-notifier" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834265 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" 
containerName="nova-manage" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834278 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" containerName="nova-scheduler-scheduler" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834307 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-log" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834318 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" containerName="nova-api-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834336 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-listener" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834348 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-evaluator" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834363 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9f67a31-4cc7-4f2e-8f62-77c3c058b2c0" containerName="dnsmasq-dns" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.834374 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" containerName="aodh-api" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.836787 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.841029 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.841335 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.852474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl" (OuterVolumeSpecName: "kube-api-access-xphwl") pod "938ef95c-9a4f-4f1e-b92c-8c16f0043102" (UID: "938ef95c-9a4f-4f1e-b92c-8c16f0043102"). InnerVolumeSpecName "kube-api-access-xphwl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.866854 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.882931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data" (OuterVolumeSpecName: "config-data") pod "938ef95c-9a4f-4f1e-b92c-8c16f0043102" (UID: "938ef95c-9a4f-4f1e-b92c-8c16f0043102"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.892335 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "938ef95c-9a4f-4f1e-b92c-8c16f0043102" (UID: "938ef95c-9a4f-4f1e-b92c-8c16f0043102"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.934427 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.934529 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7zwn\" (UniqueName: \"kubernetes.io/projected/7d99eaa1-3945-4192-9d61-7668d944bc63-kube-api-access-t7zwn\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.934863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-config-data\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.935157 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.935687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d99eaa1-3945-4192-9d61-7668d944bc63-logs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.938096 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.938145 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xphwl\" (UniqueName: \"kubernetes.io/projected/938ef95c-9a4f-4f1e-b92c-8c16f0043102-kube-api-access-xphwl\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:12 crc kubenswrapper[4985]: I0128 18:41:12.938164 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/938ef95c-9a4f-4f1e-b92c-8c16f0043102-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040440 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d99eaa1-3945-4192-9d61-7668d944bc63-logs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040640 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 
18:41:13.040687 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7zwn\" (UniqueName: \"kubernetes.io/projected/7d99eaa1-3945-4192-9d61-7668d944bc63-kube-api-access-t7zwn\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040792 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-config-data\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.040893 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.041557 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7d99eaa1-3945-4192-9d61-7668d944bc63-logs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.046278 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-config-data\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.047070 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.050933 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/7d99eaa1-3945-4192-9d61-7668d944bc63-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.061858 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7zwn\" (UniqueName: \"kubernetes.io/projected/7d99eaa1-3945-4192-9d61-7668d944bc63-kube-api-access-t7zwn\") pod \"nova-metadata-0\" (UID: \"7d99eaa1-3945-4192-9d61-7668d944bc63\") " pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.137582 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.137568 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"938ef95c-9a4f-4f1e-b92c-8c16f0043102","Type":"ContainerDied","Data":"8d462a40beef6fc701ba91c721938ba8a5ec0c9999812346c5f163a3e951b156"} Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.138347 4985 scope.go:117] "RemoveContainer" containerID="047e49fb740d3728b2028c43797afba2c5712fd239c4d5f5d399c254bdc7fda2" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.141812 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"1901b8df-d418-45ea-8d73-c6ffbf3a0da5","Type":"ContainerDied","Data":"0e67457eae33c25cf3a4581aecdd202fe5ea7cb4f78ba1758d22e2ed33abfd6b"} Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.141822 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.144342 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7258e3aa-2eb9-4bc7-a143-76946c12b889","Type":"ContainerDied","Data":"c90565f788cfb36cdadf74a3373459a040e9f918b36e0c76ca75c9290bca74e9"} Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.144484 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.194943 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.198124 4985 scope.go:117] "RemoveContainer" containerID="116b4a8f5e3104f46338144e21ea08411d9e0947488b95acdc8fa986fd480e55" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.202322 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.225465 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.241884 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.255468 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.257351 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.263599 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.296981 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="938ef95c-9a4f-4f1e-b92c-8c16f0043102" path="/var/lib/kubelet/pods/938ef95c-9a4f-4f1e-b92c-8c16f0043102/volumes" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.297838 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9aa1f962-f78d-41dc-a567-7c749f53ce57" path="/var/lib/kubelet/pods/9aa1f962-f78d-41dc-a567-7c749f53ce57/volumes" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.298521 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.298552 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.303566 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.323344 4985 scope.go:117] "RemoveContainer" containerID="45ae2f94d58662256dd9e3846658d96a9b1c7b7c477db901916e216192ebd2f3" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.325722 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.331201 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.332784 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336021 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336231 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336373 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.336385 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.340178 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.348472 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68dv9\" (UniqueName: \"kubernetes.io/projected/bdade9ba-ba1b-4093-bc40-73f68c84615f-kube-api-access-68dv9\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.349658 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-config-data\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.349693 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.359192 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.366107 4985 scope.go:117] "RemoveContainer" containerID="5fe594e43016038bb82553490c959e421cf981ca7b939b3fb56693d76b19142d" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.379225 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.383107 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.388408 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.388787 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.389031 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.389436 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.426432 4985 scope.go:117] "RemoveContainer" containerID="cb1badf43fc5d99f4394e22eeadf7de3507d22dd49f7bc8d099cbb13b55d6eea" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.452945 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454070 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454101 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68dv9\" (UniqueName: \"kubernetes.io/projected/bdade9ba-ba1b-4093-bc40-73f68c84615f-kube-api-access-68dv9\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454274 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454340 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc 
kubenswrapper[4985]: I0128 18:41:13.454397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454531 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-config-data\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454564 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.454635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.465081 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.469798 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdade9ba-ba1b-4093-bc40-73f68c84615f-config-data\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.477897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68dv9\" (UniqueName: \"kubernetes.io/projected/bdade9ba-ba1b-4093-bc40-73f68c84615f-kube-api-access-68dv9\") pod \"nova-scheduler-0\" (UID: \"bdade9ba-ba1b-4093-bc40-73f68c84615f\") " pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556772 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556859 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-config-data\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t86tb\" (UniqueName: \"kubernetes.io/projected/11eaf6b3-7169-4587-af33-68f04428e630-kube-api-access-t86tb\") pod \"nova-api-0\" (UID: 
\"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.556997 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557068 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eaf6b3-7169-4587-af33-68f04428e630-logs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557368 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557388 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557414 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557469 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.557517 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-public-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.566132 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"aodh-0\" (UID: 
\"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.566218 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.566324 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.572759 4985 scope.go:117] "RemoveContainer" containerID="091866b67d722b85f66b348b87fcb2e2785f91d8fccccba9f3e2b09885d4aade" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.575480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.591706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.596149 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.597239 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"aodh-0\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.654368 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/aodh-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.659511 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-public-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660521 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-config-data\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660583 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t86tb\" (UniqueName: \"kubernetes.io/projected/11eaf6b3-7169-4587-af33-68f04428e630-kube-api-access-t86tb\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.660606 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.661148 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eaf6b3-7169-4587-af33-68f04428e630-logs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.661517 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/11eaf6b3-7169-4587-af33-68f04428e630-logs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.663049 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-internal-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.666085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.666131 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-public-tls-certs\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc 
kubenswrapper[4985]: I0128 18:41:13.667847 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/11eaf6b3-7169-4587-af33-68f04428e630-config-data\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.682857 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t86tb\" (UniqueName: \"kubernetes.io/projected/11eaf6b3-7169-4587-af33-68f04428e630-kube-api-access-t86tb\") pod \"nova-api-0\" (UID: \"11eaf6b3-7169-4587-af33-68f04428e630\") " pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.738630 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.755073 4985 scope.go:117] "RemoveContainer" containerID="1a251a8091ad2d86f44bec193d866720c1dfdcafe9383258c1b57b5edba7d8dc" Jan 28 18:41:13 crc kubenswrapper[4985]: I0128 18:41:13.817494 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 18:41:13 crc kubenswrapper[4985]: W0128 18:41:13.843706 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7d99eaa1_3945_4192_9d61_7668d944bc63.slice/crio-c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06 WatchSource:0}: Error finding container c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06: Status 404 returned error can't find the container with id c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06 Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.132991 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 18:41:14 crc kubenswrapper[4985]: W0128 18:41:14.142444 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdade9ba_ba1b_4093_bc40_73f68c84615f.slice/crio-11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5 WatchSource:0}: Error finding container 11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5: Status 404 returned error can't find the container with id 11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5 Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.162079 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d99eaa1-3945-4192-9d61-7668d944bc63","Type":"ContainerStarted","Data":"cb4e77165d4c242fceac190bae312018f4bf5ba3d1b964f0f395b55804829001"} Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.162130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d99eaa1-3945-4192-9d61-7668d944bc63","Type":"ContainerStarted","Data":"c1b1b1c71c676689b1ac66ce78be53cc98fe0ab7f13cfd561d142c3e25661d06"} Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.164435 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bdade9ba-ba1b-4093-bc40-73f68c84615f","Type":"ContainerStarted","Data":"11e7d1976880568de7f209c49fdfd85b93388e3216f4220a67f14e8fe47407b5"} Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.287580 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:41:14 crc kubenswrapper[4985]: W0128 18:41:14.381623 4985 manager.go:1169] Failed to 
process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod11eaf6b3_7169_4587_af33_68f04428e630.slice/crio-034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586 WatchSource:0}: Error finding container 034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586: Status 404 returned error can't find the container with id 034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586 Jan 28 18:41:14 crc kubenswrapper[4985]: I0128 18:41:14.390928 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.184566 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eaf6b3-7169-4587-af33-68f04428e630","Type":"ContainerStarted","Data":"2da76c7b42f6e653a658e564cc2f54b45b0ed659bf455b1ce5864b0d1b7b80db"} Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.184914 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eaf6b3-7169-4587-af33-68f04428e630","Type":"ContainerStarted","Data":"42fd70cf0dd54e6443e4a2a0fa1c29031e80910dccef760736776a3c20cf849f"} Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.184933 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"11eaf6b3-7169-4587-af33-68f04428e630","Type":"ContainerStarted","Data":"034ae2c0070804e7db1906adfc26624a7f8fe2b13ab6eba51dcd4a3411e80586"} Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.188821 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"7d99eaa1-3945-4192-9d61-7668d944bc63","Type":"ContainerStarted","Data":"62c59e17b831dbd248c35901ed743b75a136fc04a9d8bdbf20cf7202fb2a2f48"} Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.192508 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"bdade9ba-ba1b-4093-bc40-73f68c84615f","Type":"ContainerStarted","Data":"1efca71695f8186c9bc5d99e0fbbf2c7fca3405a714627a13e17d76b0b7042a7"} Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.194507 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"bc5e5343b1013225c0f09fa05053ffaef8f092c7d05aeab8940382306b98a83a"} Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.224286 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=3.224267618 podStartE2EDuration="3.224267618s" podCreationTimestamp="2026-01-28 18:41:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:15.211468766 +0000 UTC m=+1686.038031597" watchObservedRunningTime="2026-01-28 18:41:15.224267618 +0000 UTC m=+1686.050830439" Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.315825 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1901b8df-d418-45ea-8d73-c6ffbf3a0da5" path="/var/lib/kubelet/pods/1901b8df-d418-45ea-8d73-c6ffbf3a0da5/volumes" Jan 28 18:41:15 crc kubenswrapper[4985]: I0128 18:41:15.317776 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7258e3aa-2eb9-4bc7-a143-76946c12b889" path="/var/lib/kubelet/pods/7258e3aa-2eb9-4bc7-a143-76946c12b889/volumes" Jan 28 18:41:16 crc kubenswrapper[4985]: I0128 18:41:16.215824 4985 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7"} Jan 28 18:41:16 crc kubenswrapper[4985]: I0128 18:41:16.253610 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.253583024 podStartE2EDuration="3.253583024s" podCreationTimestamp="2026-01-28 18:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:16.244641911 +0000 UTC m=+1687.071204772" watchObservedRunningTime="2026-01-28 18:41:16.253583024 +0000 UTC m=+1687.080145845" Jan 28 18:41:16 crc kubenswrapper[4985]: I0128 18:41:16.261877 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=3.261853617 podStartE2EDuration="3.261853617s" podCreationTimestamp="2026-01-28 18:41:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:41:15.237187642 +0000 UTC m=+1686.063750463" watchObservedRunningTime="2026-01-28 18:41:16.261853617 +0000 UTC m=+1687.088416438" Jan 28 18:41:16 crc kubenswrapper[4985]: I0128 18:41:16.264239 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:16 crc kubenswrapper[4985]: E0128 18:41:16.264726 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:41:17 crc kubenswrapper[4985]: I0128 18:41:17.238022 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895"} Jan 28 18:41:18 crc kubenswrapper[4985]: I0128 18:41:18.204455 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:41:18 crc kubenswrapper[4985]: I0128 18:41:18.205483 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 18:41:18 crc kubenswrapper[4985]: I0128 18:41:18.597119 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 18:41:19 crc kubenswrapper[4985]: I0128 18:41:19.262230 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65"} Jan 28 18:41:21 crc kubenswrapper[4985]: I0128 18:41:21.475073 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:41:22 crc kubenswrapper[4985]: I0128 18:41:22.320973 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerStarted","Data":"3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731"} Jan 28 18:41:22 crc kubenswrapper[4985]: I0128 
18:41:22.349496 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=2.788268824 podStartE2EDuration="9.349478909s" podCreationTimestamp="2026-01-28 18:41:13 +0000 UTC" firstStartedPulling="2026-01-28 18:41:14.305120963 +0000 UTC m=+1685.131683784" lastFinishedPulling="2026-01-28 18:41:20.866331028 +0000 UTC m=+1691.692893869" observedRunningTime="2026-01-28 18:41:22.343277704 +0000 UTC m=+1693.169840535" watchObservedRunningTime="2026-01-28 18:41:22.349478909 +0000 UTC m=+1693.176041740" Jan 28 18:41:22 crc kubenswrapper[4985]: I0128 18:41:22.689182 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" probeResult="failure" output=< Jan 28 18:41:22 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:41:22 crc kubenswrapper[4985]: > Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.204377 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.204619 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.597197 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.628687 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.738458 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:41:23 crc kubenswrapper[4985]: I0128 18:41:23.738507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.221559 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7d99eaa1-3945-4192-9d61-7668d944bc63" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.221612 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="7d99eaa1-3945-4192-9d61-7668d944bc63" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.1.6:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.382645 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.750513 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11eaf6b3-7169-4587-af33-68f04428e630" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:41:24 crc kubenswrapper[4985]: I0128 18:41:24.760465 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="11eaf6b3-7169-4587-af33-68f04428e630" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.1.9:8774/\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:41:29 crc kubenswrapper[4985]: I0128 18:41:29.264544 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:29 crc kubenswrapper[4985]: E0128 18:41:29.265223 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:41:32 crc kubenswrapper[4985]: I0128 18:41:32.667169 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" probeResult="failure" output=< Jan 28 18:41:32 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:41:32 crc kubenswrapper[4985]: > Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.210085 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.215613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.217504 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.490960 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.746665 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.746786 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.747470 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.747507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.756985 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:41:33 crc kubenswrapper[4985]: I0128 18:41:33.758923 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 18:41:41 crc kubenswrapper[4985]: I0128 18:41:41.670571 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:41 crc kubenswrapper[4985]: I0128 18:41:41.734099 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:41 crc kubenswrapper[4985]: I0128 18:41:41.925103 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.632289 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-sn5lq" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" containerID="cri-o://a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" gracePeriod=2 Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.836925 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.858749 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-qjrfx"] Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.918185 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.920217 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:43 crc kubenswrapper[4985]: I0128 18:41:43.946437 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.012898 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.012958 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.013097 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.115911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.116182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.116362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.123725 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.124053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.135592 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"heat-db-sync-r7ml7\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.245732 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.260135 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.263374 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.263656 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.422449 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") pod \"fe3dd10e-5081-4256-9c08-e2be3557bf65\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.422864 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") pod \"fe3dd10e-5081-4256-9c08-e2be3557bf65\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.422984 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") pod \"fe3dd10e-5081-4256-9c08-e2be3557bf65\" (UID: \"fe3dd10e-5081-4256-9c08-e2be3557bf65\") " Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.423921 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities" (OuterVolumeSpecName: "utilities") pod "fe3dd10e-5081-4256-9c08-e2be3557bf65" (UID: "fe3dd10e-5081-4256-9c08-e2be3557bf65"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.424493 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.427406 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m" (OuterVolumeSpecName: "kube-api-access-blw8m") pod "fe3dd10e-5081-4256-9c08-e2be3557bf65" (UID: "fe3dd10e-5081-4256-9c08-e2be3557bf65"). InnerVolumeSpecName "kube-api-access-blw8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.491825 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fe3dd10e-5081-4256-9c08-e2be3557bf65" (UID: "fe3dd10e-5081-4256-9c08-e2be3557bf65"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.526728 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blw8m\" (UniqueName: \"kubernetes.io/projected/fe3dd10e-5081-4256-9c08-e2be3557bf65-kube-api-access-blw8m\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.526773 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fe3dd10e-5081-4256-9c08-e2be3557bf65-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644297 4985 generic.go:334] "Generic (PLEG): container finished" podID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" exitCode=0 Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644351 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849"} Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644417 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-sn5lq" event={"ID":"fe3dd10e-5081-4256-9c08-e2be3557bf65","Type":"ContainerDied","Data":"1118f0c768bed110a3a9b05d6637f78ab5e21ee7e674a7222c90a1b7f83294fd"} Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644415 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-sn5lq" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.644443 4985 scope.go:117] "RemoveContainer" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.678744 4985 scope.go:117] "RemoveContainer" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.689770 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.700909 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-sn5lq"] Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.715343 4985 scope.go:117] "RemoveContainer" containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.738236 4985 scope.go:117] "RemoveContainer" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.738694 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849\": container with ID starting with a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849 not found: ID does not exist" containerID="a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.738726 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849"} err="failed to get container status \"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849\": rpc error: code = NotFound desc = could not find container \"a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849\": container with ID starting with a1e5e70ca53d6e5b9b052802d01db17b27bd6ca4fb557ee3484d2affdd7bf849 not found: ID does not exist" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.738747 4985 scope.go:117] "RemoveContainer" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.739177 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136\": container with ID starting with e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136 not found: ID does not exist" containerID="e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.739225 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136"} err="failed to get container status \"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136\": rpc error: code = NotFound desc = could not find container \"e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136\": container with ID starting with e68a3c28344d1db667ba325f372c58d1d6313d4c18c62f500b098f85cb074136 not found: ID does not exist" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.739277 4985 scope.go:117] "RemoveContainer" 
containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" Jan 28 18:41:44 crc kubenswrapper[4985]: E0128 18:41:44.739612 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724\": container with ID starting with 83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724 not found: ID does not exist" containerID="83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.739644 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724"} err="failed to get container status \"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724\": rpc error: code = NotFound desc = could not find container \"83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724\": container with ID starting with 83ed03ca8e92a1f8d81caae6cf576f85b8172feda82d640830a154bd41f4f724 not found: ID does not exist" Jan 28 18:41:44 crc kubenswrapper[4985]: I0128 18:41:44.807306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.304339 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dda9fdbc-ce81-4e63-b32f-733379d893d4" path="/var/lib/kubelet/pods/dda9fdbc-ce81-4e63-b32f-733379d893d4/volumes" Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.307072 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" path="/var/lib/kubelet/pods/fe3dd10e-5081-4256-9c08-e2be3557bf65/volumes" Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.675780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerStarted","Data":"319bf1dcb8102c51957853cf08d45a01f4387e66993d72cad23092e9e3dddb4f"} Jan 28 18:41:45 crc kubenswrapper[4985]: I0128 18:41:45.779153 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.643453 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644079 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" containerID="cri-o://b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0" gracePeriod=30 Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644105 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" containerID="cri-o://4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba" gracePeriod=30 Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644192 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" containerID="cri-o://2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0" gracePeriod=30 Jan 28 18:41:46 crc kubenswrapper[4985]: I0128 18:41:46.644432 4985 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" containerID="cri-o://fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236" gracePeriod=30 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.092432 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.705991 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236" exitCode=0 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706030 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba" exitCode=2 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706043 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0" exitCode=0 Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706055 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236"} Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706090 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba"} Jan 28 18:41:47 crc kubenswrapper[4985]: I0128 18:41:47.706102 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0"} Jan 28 18:41:48 crc kubenswrapper[4985]: I0128 18:41:48.746793 4985 generic.go:334] "Generic (PLEG): container finished" podID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerID="2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0" exitCode=0 Jan 28 18:41:48 crc kubenswrapper[4985]: I0128 18:41:48.747383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0"} Jan 28 18:41:48 crc kubenswrapper[4985]: I0128 18:41:48.970540 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156553 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156853 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.156886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157050 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157164 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157210 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") pod \"9079aa62-2b93-4559-bff4-af80b69e23a7\" (UID: \"9079aa62-2b93-4559-bff4-af80b69e23a7\") " Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.157850 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.158205 4985 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.161017 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.176649 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll" (OuterVolumeSpecName: "kube-api-access-5vnll") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "kube-api-access-5vnll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.177038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts" (OuterVolumeSpecName: "scripts") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.262189 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.262267 4985 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9079aa62-2b93-4559-bff4-af80b69e23a7-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.262279 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vnll\" (UniqueName: \"kubernetes.io/projected/9079aa62-2b93-4559-bff4-af80b69e23a7-kube-api-access-5vnll\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.272438 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.303089 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.328562 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.364483 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.365230 4985 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.365280 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.408440 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data" (OuterVolumeSpecName: "config-data") pod "9079aa62-2b93-4559-bff4-af80b69e23a7" (UID: "9079aa62-2b93-4559-bff4-af80b69e23a7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.469513 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9079aa62-2b93-4559-bff4-af80b69e23a7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.763007 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9079aa62-2b93-4559-bff4-af80b69e23a7","Type":"ContainerDied","Data":"99f6a59231cb74972d7065e16a91981feb750820d3a47ac21d46c1a8419a7fb5"} Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.763232 4985 scope.go:117] "RemoveContainer" containerID="fece39157ded0ea37a252872cc2390f006a1bb017033fdc56f58780de2bd7236" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.763236 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.860463 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.881556 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.908355 4985 scope.go:117] "RemoveContainer" containerID="4649bed0f7e2d88fd12f9c7284945a04a799e7c6515875078e092e9a5114b1ba" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.950064 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951320 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951337 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951363 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951371 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951392 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951400 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951557 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-utilities" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951569 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-utilities" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951598 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951619 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951632 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951640 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: E0128 18:41:49.951654 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-content" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.951662 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="extract-content" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952033 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="proxy-httpd" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952073 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="sg-core" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952093 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe3dd10e-5081-4256-9c08-e2be3557bf65" containerName="registry-server" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952111 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-central-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.952140 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" containerName="ceilometer-notification-agent" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.955173 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.959595 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.959851 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.963827 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 18:41:49 crc kubenswrapper[4985]: I0128 18:41:49.969185 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.017616 4985 scope.go:117] "RemoveContainer" containerID="2abc407d0b012d9d9eec8a48e74a309321192094aaee78b70f6990073a7856e0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.084505 4985 scope.go:117] "RemoveContainer" containerID="b945ecd85cb2d6c7bb07e875ec3e1e57a0f59ee2eb03cf09cfc003be7f2c0ad0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.094951 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-config-data\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095120 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzxcg\" (UniqueName: \"kubernetes.io/projected/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-kube-api-access-mzxcg\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095425 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-run-httpd\") pod 
\"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.095759 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-scripts\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.096034 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-log-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.096079 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.096183 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.198808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.198890 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.198958 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-config-data\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199004 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199098 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzxcg\" (UniqueName: \"kubernetes.io/projected/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-kube-api-access-mzxcg\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199149 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-run-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199477 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-scripts\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199570 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-log-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199924 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-run-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.199993 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-log-httpd\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.203863 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.204614 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.204794 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-config-data\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.210790 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.217365 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-scripts\") pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.227850 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzxcg\" (UniqueName: \"kubernetes.io/projected/b29b2a3b-ca12-4e1c-8816-0d28cebe2dde-kube-api-access-mzxcg\") 
pod \"ceilometer-0\" (UID: \"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde\") " pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.297956 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.870179 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 18:41:50 crc kubenswrapper[4985]: I0128 18:41:50.941207 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-2" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" containerID="cri-o://1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25" gracePeriod=604795 Jan 28 18:41:51 crc kubenswrapper[4985]: I0128 18:41:51.282389 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9079aa62-2b93-4559-bff4-af80b69e23a7" path="/var/lib/kubelet/pods/9079aa62-2b93-4559-bff4-af80b69e23a7/volumes" Jan 28 18:41:51 crc kubenswrapper[4985]: I0128 18:41:51.714076 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" containerID="cri-o://aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c" gracePeriod=604796 Jan 28 18:41:51 crc kubenswrapper[4985]: I0128 18:41:51.797772 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"034d5baa8d85116bd4079fc576f9bfd89326c5aef395eac6b4985a13d07cd61a"} Jan 28 18:41:57 crc kubenswrapper[4985]: I0128 18:41:57.935372 4985 generic.go:334] "Generic (PLEG): container finished" podID="9549037f-5867-44ac-86dc-a02105e4c414" containerID="1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25" exitCode=0 Jan 28 18:41:57 crc kubenswrapper[4985]: I0128 18:41:57.935877 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerDied","Data":"1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25"} Jan 28 18:41:58 crc kubenswrapper[4985]: I0128 18:41:58.951278 4985 generic.go:334] "Generic (PLEG): container finished" podID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerID="aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c" exitCode=0 Jan 28 18:41:58 crc kubenswrapper[4985]: I0128 18:41:58.951379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerDied","Data":"aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c"} Jan 28 18:41:59 crc kubenswrapper[4985]: I0128 18:41:59.267137 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:41:59 crc kubenswrapper[4985]: E0128 18:41:59.267877 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.188369 4985 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.195283 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.297603 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306426 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306506 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306686 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306715 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.306769 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309120 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 
18:42:01.309191 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309226 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309292 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309327 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.309392 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.318799 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.318926 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319052 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319081 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319130 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 
18:42:01.319173 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.319282 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") pod \"41c1858c-ad6e-441f-b998-c57290cc5d68\" (UID: \"41c1858c-ad6e-441f-b998-c57290cc5d68\") " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.335855 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.336726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.336808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.336879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.343050 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "plugins-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.343202 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb" (OuterVolumeSpecName: "kube-api-access-pdmbb") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "kube-api-access-pdmbb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.352116 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql" (OuterVolumeSpecName: "kube-api-access-td8ql") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "kube-api-access-td8ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.352334 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.352879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.353231 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.358572 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.379977 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info" (OuterVolumeSpecName: "pod-info") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.384917 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.430221 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde" (OuterVolumeSpecName: "persistence") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.433967 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434004 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-td8ql\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-kube-api-access-td8ql\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434016 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434026 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434034 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434046 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/41c1858c-ad6e-441f-b998-c57290cc5d68-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434055 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/41c1858c-ad6e-441f-b998-c57290cc5d68-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434064 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/9549037f-5867-44ac-86dc-a02105e4c414-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434074 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434086 4985 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pdmbb\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-kube-api-access-pdmbb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434096 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434104 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434132 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") on node \"crc\" " Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.434142 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.447287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info" (OuterVolumeSpecName: "pod-info") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.450394 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data" (OuterVolumeSpecName: "config-data") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: E0128 18:42:01.456682 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28 podName:9549037f-5867-44ac-86dc-a02105e4c414 nodeName:}" failed. No retries permitted until 2026-01-28 18:42:01.956652493 +0000 UTC m=+1732.783215314 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "persistence" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414") : kubernetes.io/csi: Unmounter.TearDownAt failed: rpc error: code = Unknown desc = check target path: could not get consistent content of /proc/mounts after 3 attempts Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.494525 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.515761 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde") on node "crc" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.533610 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf" (OuterVolumeSpecName: "server-conf") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.539603 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.541596 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.541833 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.541945 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/9549037f-5867-44ac-86dc-a02105e4c414-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.552889 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data" (OuterVolumeSpecName: "config-data") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.553009 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf" (OuterVolumeSpecName: "server-conf") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.594978 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.610912 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "41c1858c-ad6e-441f-b998-c57290cc5d68" (UID: "41c1858c-ad6e-441f-b998-c57290cc5d68"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645116 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/9549037f-5867-44ac-86dc-a02105e4c414-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645148 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/41c1858c-ad6e-441f-b998-c57290cc5d68-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645159 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/41c1858c-ad6e-441f-b998-c57290cc5d68-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.645169 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/9549037f-5867-44ac-86dc-a02105e4c414-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.989566 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"41c1858c-ad6e-441f-b998-c57290cc5d68","Type":"ContainerDied","Data":"f0ff3c53025b9ae422df2e7cccc0ec25b7dd495fd74546696ee043e91187bb41"} Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.989610 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.989624 4985 scope.go:117] "RemoveContainer" containerID="aca2d63153078144b7f42a325b0b7ca02eb87cda15e02f68bf7871b8a8ca688c" Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.997921 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"9549037f-5867-44ac-86dc-a02105e4c414","Type":"ContainerDied","Data":"3743df7761e9f95626d5189d3a604fc7ae4f9d57706f392ce36c256fb508d124"} Jan 28 18:42:01 crc kubenswrapper[4985]: I0128 18:42:01.998046 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.029846 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.050281 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.053522 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"9549037f-5867-44ac-86dc-a02105e4c414\" (UID: \"9549037f-5867-44ac-86dc-a02105e4c414\") " Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.073834 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074358 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074375 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074406 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074412 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074434 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074440 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="setup-container" Jan 28 18:42:02 crc kubenswrapper[4985]: E0128 18:42:02.074456 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074462 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074696 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.074714 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq"
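The SyncLoop ADD for the replacement rabbitmq-cell1-server-0 triggers the resource managers to purge state left behind by the two deleted pod UIDs; that is what the cpu_manager "RemoveStaleState: removing container" / state_mem "Deleted CPUSet assignment" pairs and the memory_manager lines above record. A minimal sketch of that sweep, assuming a plain in-memory map where the real kubelet uses a checkpointed state store:

```go
package main

import "fmt"

// podUID -> containerName -> assignment (e.g. a CPU-set string).
type assignments map[string]map[string]string

// removeStaleState drops state for any pod that is no longer active,
// mirroring the RemoveStaleState / "Deleted CPUSet assignment" log pairs.
// Real kubelet state is checkpointed to disk; a plain map is used here
// purely for illustration.
func removeStaleState(state assignments, activePods map[string]bool) {
	for podUID, containers := range state {
		if activePods[podUID] {
			continue
		}
		for containerName := range containers {
			fmt.Printf("RemoveStaleState: removing container pod=%s container=%s\n", podUID, containerName)
			delete(containers, containerName)
		}
		delete(state, podUID)
	}
}

func main() {
	state := assignments{
		"41c1858c-ad6e-441f-b998-c57290cc5d68": {"setup-container": "0-3", "rabbitmq": "0-3"},
		"9549037f-5867-44ac-86dc-a02105e4c414": {"setup-container": "0-3", "rabbitmq": "0-3"},
	}
	// Only the replacement pod is active at this point.
	removeStaleState(state, map[string]bool{"34d82dad-dc98-4c0f-90c2-0b25f7d73c01": true})
}
```

Note that the cpu_manager entries log at error severity (E0128) even though the cleanup appears routine here; only the log level, not the outcome, distinguishes them from the state_mem confirmations.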
Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.075879 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.078691 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.078882 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079012 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079210 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-zs2dp" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079382 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.079517 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.082480 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.093416 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.095986 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28" (OuterVolumeSpecName: "persistence") pod "9549037f-5867-44ac-86dc-a02105e4c414" (UID: "9549037f-5867-44ac-86dc-a02105e4c414"). InnerVolumeSpecName "pvc-640fff7e-293b-4d54-bc96-a2aead370a28".
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156361 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156430 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156532 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156588 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156642 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156663 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156794 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156826 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156914 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.156942 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74lst\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-kube-api-access-74lst\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.157034 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") on node \"crc\" " Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.200500 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.201104 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-640fff7e-293b-4d54-bc96-a2aead370a28" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28") on node "crc" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.248875 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.261953 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.262403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.262608 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.262762 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc 
kubenswrapper[4985]: I0128 18:42:02.262913 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263123 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263426 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263626 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265144 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.264300 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.263750 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265645 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: 
\"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74lst\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-kube-api-access-74lst\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265682 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.265853 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.272955 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.279337 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.288854 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.288863 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.288925 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ac8bde78162f1032f95f647174ef8183aa4e0f86240347c6b6b8d4a86e7076a1/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.291839 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.306812 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74lst\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-kube-api-access-74lst\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.308304 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/34d82dad-dc98-4c0f-90c2-0b25f7d73c01-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.329318 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-2"]
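With rabbitmq-server-2 now re-added, the entries that follow repeat the reconciler cadence already seen for rabbitmq-cell1-server-0: for each volume, "operationExecutor.VerifyControllerAttachedVolume started" (reconciler_common.go:245), then "operationExecutor.MountVolume started" (reconciler_common.go:218), then "MountVolume.SetUp succeeded" (operation_generator.go:637). A compressed sketch of that loop, with plain functions standing in for kubelet's desired-state-of-world and operation executor, and the steps collapsed into one pass where the real reconciler runs them across separate iterations:

```go
package main

import "fmt"

// volume is one entry in the desired state of world for a pod.
type volume struct{ uniqueName string }

// reconcile walks desired-state volumes the way the log does: verify the
// volume is attached, start MountVolume, then report SetUp success. The
// three prints correspond to the reconciler_common.go:245,
// reconciler_common.go:218 and operation_generator.go:637 lines; the
// structure is illustrative, not kubelet's real reconciler.
func reconcile(pod string, desired []volume, mounted map[string]bool) {
	for _, v := range desired {
		if mounted[v.uniqueName] {
			continue
		}
		fmt.Printf("operationExecutor.VerifyControllerAttachedVolume started for volume %q pod=%q\n", v.uniqueName, pod)
		fmt.Printf("operationExecutor.MountVolume started for volume %q pod=%q\n", v.uniqueName, pod)
		// The real executor runs SetUp asynchronously and records the
		// result in the actual state of world on success.
		mounted[v.uniqueName] = true
		fmt.Printf("MountVolume.SetUp succeeded for volume %q pod=%q\n", v.uniqueName, pod)
	}
}

func main() {
	mounted := map[string]bool{}
	reconcile("openstack/rabbitmq-server-2", []volume{
		{"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-server-conf"},
		{"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-tls"},
	}, mounted)
}
```

The same pattern accounts for the bulk of the remaining 18:42:02 and 18:42:09 entries, first for rabbitmq-server-2 and then for the dnsmasq pod.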
Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.333811 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.357698 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374725 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/249a0e05-d210-402f-b7f8-2caf153346d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374827 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374882 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374916 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374961 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5gk5\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-kube-api-access-v5gk5\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.374981 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375095 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375143 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/249a0e05-d210-402f-b7f8-2caf153346d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375191 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.375214 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.384650 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ef0e9afd-52f4-49f3-ab31-761a6da55cde\") pod \"rabbitmq-cell1-server-0\" (UID: \"34d82dad-dc98-4c0f-90c2-0b25f7d73c01\") " pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476812 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/249a0e05-d210-402f-b7f8-2caf153346d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476897 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476921 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476957 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/249a0e05-d210-402f-b7f8-2caf153346d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.476980 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477008 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: 
\"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477046 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477082 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5gk5\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-kube-api-access-v5gk5\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477139 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477727 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.477888 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-plugins\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.479122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-plugins-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.480966 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.481272 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-tls\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.482481 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice 
STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.482535 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18da3f6437b5d54d0b067e2370e468c4fc3f3bb8be36828902e2b198f7e21ef1/globalmount\"" pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.483169 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/249a0e05-d210-402f-b7f8-2caf153346d8-erlang-cookie-secret\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.483494 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-config-data\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.483770 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/249a0e05-d210-402f-b7f8-2caf153346d8-server-conf\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.485245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/249a0e05-d210-402f-b7f8-2caf153346d8-pod-info\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.491067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-rabbitmq-confd\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.500918 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.502200 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5gk5\" (UniqueName: \"kubernetes.io/projected/249a0e05-d210-402f-b7f8-2caf153346d8-kube-api-access-v5gk5\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.563005 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-640fff7e-293b-4d54-bc96-a2aead370a28\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-640fff7e-293b-4d54-bc96-a2aead370a28\") pod \"rabbitmq-server-2\" (UID: \"249a0e05-d210-402f-b7f8-2caf153346d8\") " pod="openstack/rabbitmq-server-2" Jan 28 18:42:02 crc kubenswrapper[4985]: I0128 18:42:02.710630 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-2" Jan 28 18:42:03 crc kubenswrapper[4985]: I0128 18:42:03.282133 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" path="/var/lib/kubelet/pods/41c1858c-ad6e-441f-b998-c57290cc5d68/volumes" Jan 28 18:42:03 crc kubenswrapper[4985]: I0128 18:42:03.283822 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9549037f-5867-44ac-86dc-a02105e4c414" path="/var/lib/kubelet/pods/9549037f-5867-44ac-86dc-a02105e4c414/volumes" Jan 28 18:42:04 crc kubenswrapper[4985]: I0128 18:42:04.874700 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="9549037f-5867-44ac-86dc-a02105e4c414" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.131:5671: i/o timeout" Jan 28 18:42:04 crc kubenswrapper[4985]: I0128 18:42:04.977933 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="41c1858c-ad6e-441f-b998-c57290cc5d68" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.132:5671: i/o timeout" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.011685 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.012099 4985 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.012234 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:heat-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested,Command:[/bin/bash],Args:[-c /usr/bin/heat-manage --config-dir /etc/heat/heat.conf.d db_sync],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/heat/heat.conf.d/00-default.conf,SubPath:00-default.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/heat/heat.conf.d/01-custom.conf,SubPath:01-custom.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-r9vtx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL 
MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42418,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*42418,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod heat-db-sync-r7ml7_openstack(627220be-fa5f-49a6-9c9e-b3ae5e49afec): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.013993 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/heat-db-sync-r7ml7" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" Jan 28 18:42:09 crc kubenswrapper[4985]: E0128 18:42:09.105428 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"heat-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-heat-engine:current-tested\\\"\"" pod="openstack/heat-db-sync-r7ml7" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.346632 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.349501 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.353562 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.377034 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.389458 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.389530 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.389953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390075 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod 
\"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390184 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390221 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.390270 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492746 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492842 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492902 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492924 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492947 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492970 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod 
\"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.492996 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.493910 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.493911 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.493954 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.494146 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.494443 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.494561 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.512319 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"dnsmasq-dns-5b75489c6f-h8w5d\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:09 crc kubenswrapper[4985]: I0128 18:42:09.680353 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:10 crc kubenswrapper[4985]: I0128 18:42:10.264344 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:10 crc kubenswrapper[4985]: E0128 18:42:10.264592 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.347198 4985 scope.go:117] "RemoveContainer" containerID="dfcb150ccda2aa4d1050a6d900540fe9f90c22d4f5256e19b6eeee11fa6e624a" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.435778 4985 scope.go:117] "RemoveContainer" containerID="1d8b169a7d964359c8bd6733d67d45546c1c642e159163c5b350061cce51fd25" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.563428 4985 scope.go:117] "RemoveContainer" containerID="bb84d317406cd6ce8331d52ba3971c969e272858edb60fe48bf5c6408f6194f8" Jan 28 18:42:12 crc kubenswrapper[4985]: E0128 18:42:12.565792 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 28 18:42:12 crc kubenswrapper[4985]: E0128 18:42:12.565844 4985 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested" Jan 28 18:42:12 crc kubenswrapper[4985]: E0128 18:42:12.565977 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n59bh579h584hbbh688h68h596h647h655h79h55hcch688h694h59chc8h54chb5h8ch568hb7h59fh557hfdh5cbh6h57dh565h656h59h97h65fq,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mzxcg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 
/var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(b29b2a3b-ca12-4e1c-8816-0d28cebe2dde): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:42:12 crc kubenswrapper[4985]: I0128 18:42:12.913215 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-2"] Jan 28 18:42:13 crc kubenswrapper[4985]: W0128 18:42:13.063936 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d82dad_dc98_4c0f_90c2_0b25f7d73c01.slice/crio-369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc WatchSource:0}: Error finding container 369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc: Status 404 returned error can't find the container with id 369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.066552 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 18:42:13 crc kubenswrapper[4985]: W0128 18:42:13.157145 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod851ea22a_e43d_4d11_911a_3ec541e6012c.slice/crio-da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e WatchSource:0}: Error finding container da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e: Status 404 returned error can't find the container with id da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.165759 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.170071 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerStarted","Data":"369ff544797e7b294c7889921af8131e263a140baba36588e10421a395b5f4cc"} Jan 28 18:42:13 crc kubenswrapper[4985]: I0128 18:42:13.176479 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerStarted","Data":"280dd66feb159a68665caed63df71059c278506556427c060145287e1aedd726"} Jan 28 18:42:14 crc kubenswrapper[4985]: I0128 18:42:14.193085 4985 generic.go:334] "Generic (PLEG): container finished" podID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerID="eb06142e49a896d0f59b1509119df8e1b80f5b08d70235e7d7d845632e5598ca" exitCode=0 Jan 28 18:42:14 crc kubenswrapper[4985]: I0128 18:42:14.193176 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerDied","Data":"eb06142e49a896d0f59b1509119df8e1b80f5b08d70235e7d7d845632e5598ca"} Jan 28 18:42:14 crc kubenswrapper[4985]: I0128 18:42:14.194941 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerStarted","Data":"da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e"} Jan 28 18:42:15 crc kubenswrapper[4985]: I0128 18:42:15.207548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerStarted","Data":"e1d8d938a013e14e34718ea005c62adcdafbd122068babd1c11dc5a7c1422bf2"} Jan 28 18:42:15 crc kubenswrapper[4985]: I0128 18:42:15.210063 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerStarted","Data":"0d9684f3d4336ae71b1f9fdea81d833a3ce461b76f547f6c936c89097d189168"} Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.222483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerStarted","Data":"b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1"} Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.222574 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.224166 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"93bb25f622215a35e032733b4664c5f7e5c37e8b8a11287fecbd4b3f644fd667"} Jan 28 18:42:16 crc kubenswrapper[4985]: I0128 18:42:16.244092 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" podStartSLOduration=7.244055308 podStartE2EDuration="7.244055308s" podCreationTimestamp="2026-01-28 18:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:16.241874967 +0000 UTC m=+1747.068437798" watchObservedRunningTime="2026-01-28 18:42:16.244055308 +0000 UTC m=+1747.070618129" Jan 28 18:42:17 crc kubenswrapper[4985]: I0128 18:42:17.237038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"5a17d16c268530c17cf1806dfcce5123026714ba2b437c71a364b66d574ea617"} Jan 28 18:42:19 crc kubenswrapper[4985]: E0128 18:42:19.128528 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" Jan 28 18:42:19 crc kubenswrapper[4985]: E0128 18:42:19.271699 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" Jan 28 
18:42:19 crc kubenswrapper[4985]: I0128 18:42:19.280747 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 18:42:19 crc kubenswrapper[4985]: I0128 18:42:19.281001 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"635d9dd27d70f1ccd27643b26e2e470fccf963c850c9c5557eaab5edb814ab6d"} Jan 28 18:42:20 crc kubenswrapper[4985]: E0128 18:42:20.283347 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ceilometer-central-agent\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-central:current-tested\\\"\"" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" Jan 28 18:42:22 crc kubenswrapper[4985]: I0128 18:42:22.265104 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:22 crc kubenswrapper[4985]: E0128 18:42:22.265439 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:23 crc kubenswrapper[4985]: I0128 18:42:23.333989 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerStarted","Data":"48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324"} Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.683306 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.719587 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-db-sync-r7ml7" podStartSLOduration=4.079966297 podStartE2EDuration="41.719553989s" podCreationTimestamp="2026-01-28 18:41:43 +0000 UTC" firstStartedPulling="2026-01-28 18:41:44.81789176 +0000 UTC m=+1715.644454581" lastFinishedPulling="2026-01-28 18:42:22.457479422 +0000 UTC m=+1753.284042273" observedRunningTime="2026-01-28 18:42:23.349703037 +0000 UTC m=+1754.176265888" watchObservedRunningTime="2026-01-28 18:42:24.719553989 +0000 UTC m=+1755.546116860" Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.756939 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.757212 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" containerID="cri-o://8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419" gracePeriod=10 Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.962962 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-jqtwd"] Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.971668 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:24 crc kubenswrapper[4985]: I0128 18:42:24.982375 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-jqtwd"] Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085683 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085711 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085830 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.085966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.086523 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtrfd\" (UniqueName: \"kubernetes.io/projected/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-kube-api-access-qtrfd\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.086700 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-config\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.157177 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.1:5353: connect: connection refused" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189362 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qtrfd\" (UniqueName: \"kubernetes.io/projected/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-kube-api-access-qtrfd\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189464 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-config\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189550 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189576 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189620 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.189657 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190834 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-nb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190835 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-openstack-edpm-ipam\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190843 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-svc\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190874 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-ovsdbserver-sb\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-config\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.190947 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-dns-swift-storage-0\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.210755 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtrfd\" (UniqueName: \"kubernetes.io/projected/63ee6cb7-f768-47d8-a266-e1e6ca6926ea-kube-api-access-qtrfd\") pod \"dnsmasq-dns-5d75f767dc-jqtwd\" (UID: \"63ee6cb7-f768-47d8-a266-e1e6ca6926ea\") " pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.358423 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.358455 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerDied","Data":"8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419"} Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.358420 4985 generic.go:334] "Generic (PLEG): container finished" podID="f33e23a8-5c59-41b1-9afe-00977f966724" containerID="8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419" exitCode=0 Jan 28 18:42:25 crc kubenswrapper[4985]: I0128 18:42:25.939994 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5d75f767dc-jqtwd"] Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.206179 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323360 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323448 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323545 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323681 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.323725 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") pod \"f33e23a8-5c59-41b1-9afe-00977f966724\" (UID: \"f33e23a8-5c59-41b1-9afe-00977f966724\") " Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.361718 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w" (OuterVolumeSpecName: "kube-api-access-qz55w") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "kube-api-access-qz55w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.425488 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" event={"ID":"63ee6cb7-f768-47d8-a266-e1e6ca6926ea","Type":"ContainerStarted","Data":"985432ad861af76eae71821d9a1f34274f7a37efd03e3e7cfd07d428e40635ab"} Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.452657 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz55w\" (UniqueName: \"kubernetes.io/projected/f33e23a8-5c59-41b1-9afe-00977f966724-kube-api-access-qz55w\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.460755 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" event={"ID":"f33e23a8-5c59-41b1-9afe-00977f966724","Type":"ContainerDied","Data":"8a81f5a6bc9aeb4779fe5ba3167c9da81f9d6b2cee2d0a3316b0a2d07b8f7a9e"} Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.460810 4985 scope.go:117] "RemoveContainer" containerID="8dde278f7ddf86385d1f8ef9bd55566ee7c04f535897d358bb08d0218ee0c419" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.466451 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84f9ccf-mp4hr" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.500271 4985 scope.go:117] "RemoveContainer" containerID="fd29c92499411247c46e32f0f3619427bf7f15dbc9ff2205fbac7905d817aa90" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.511242 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.531824 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.542544 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.560321 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.560350 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.560359 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.563681 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.590174 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config" (OuterVolumeSpecName: "config") pod "f33e23a8-5c59-41b1-9afe-00977f966724" (UID: "f33e23a8-5c59-41b1-9afe-00977f966724"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.662933 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.662966 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f33e23a8-5c59-41b1-9afe-00977f966724-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.805976 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:42:26 crc kubenswrapper[4985]: I0128 18:42:26.816453 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84f9ccf-mp4hr"] Jan 28 18:42:27 crc kubenswrapper[4985]: I0128 18:42:27.278935 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" path="/var/lib/kubelet/pods/f33e23a8-5c59-41b1-9afe-00977f966724/volumes" Jan 28 18:42:27 crc kubenswrapper[4985]: I0128 18:42:27.475607 4985 generic.go:334] "Generic (PLEG): container finished" podID="63ee6cb7-f768-47d8-a266-e1e6ca6926ea" containerID="53a1fab10c84910b7dae65cca8e794fd03ee543959c485919cd13d2287280a4a" exitCode=0 Jan 28 18:42:27 crc kubenswrapper[4985]: I0128 18:42:27.475694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" event={"ID":"63ee6cb7-f768-47d8-a266-e1e6ca6926ea","Type":"ContainerDied","Data":"53a1fab10c84910b7dae65cca8e794fd03ee543959c485919cd13d2287280a4a"} Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.491457 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" 
event={"ID":"63ee6cb7-f768-47d8-a266-e1e6ca6926ea","Type":"ContainerStarted","Data":"ef740412a8710735ab232783b3480fa853b94d1701dc6a4338aa95194f876a1e"} Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.491797 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.494467 4985 generic.go:334] "Generic (PLEG): container finished" podID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerID="48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324" exitCode=0 Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.494569 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerDied","Data":"48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324"} Jan 28 18:42:28 crc kubenswrapper[4985]: I0128 18:42:28.523746 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" podStartSLOduration=4.523724261 podStartE2EDuration="4.523724261s" podCreationTimestamp="2026-01-28 18:42:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:28.512885145 +0000 UTC m=+1759.339447966" watchObservedRunningTime="2026-01-28 18:42:28.523724261 +0000 UTC m=+1759.350287092" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.013209 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.139049 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") pod \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.141449 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") pod \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.141703 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") pod \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\" (UID: \"627220be-fa5f-49a6-9c9e-b3ae5e49afec\") " Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.186463 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx" (OuterVolumeSpecName: "kube-api-access-r9vtx") pod "627220be-fa5f-49a6-9c9e-b3ae5e49afec" (UID: "627220be-fa5f-49a6-9c9e-b3ae5e49afec"). InnerVolumeSpecName "kube-api-access-r9vtx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.244371 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9vtx\" (UniqueName: \"kubernetes.io/projected/627220be-fa5f-49a6-9c9e-b3ae5e49afec-kube-api-access-r9vtx\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.249718 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "627220be-fa5f-49a6-9c9e-b3ae5e49afec" (UID: "627220be-fa5f-49a6-9c9e-b3ae5e49afec"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.287135 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data" (OuterVolumeSpecName: "config-data") pod "627220be-fa5f-49a6-9c9e-b3ae5e49afec" (UID: "627220be-fa5f-49a6-9c9e-b3ae5e49afec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.348222 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.348339 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/627220be-fa5f-49a6-9c9e-b3ae5e49afec-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.516168 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-db-sync-r7ml7" event={"ID":"627220be-fa5f-49a6-9c9e-b3ae5e49afec","Type":"ContainerDied","Data":"319bf1dcb8102c51957853cf08d45a01f4387e66993d72cad23092e9e3dddb4f"} Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.516488 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="319bf1dcb8102c51957853cf08d45a01f4387e66993d72cad23092e9e3dddb4f" Jan 28 18:42:30 crc kubenswrapper[4985]: I0128 18:42:30.516205 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-db-sync-r7ml7" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.461135 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-engine-5df4f6c8f9-fvvqb"] Jan 28 18:42:31 crc kubenswrapper[4985]: E0128 18:42:31.462715 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerName="heat-db-sync" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.462805 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerName="heat-db-sync" Jan 28 18:42:31 crc kubenswrapper[4985]: E0128 18:42:31.462892 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.462982 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" Jan 28 18:42:31 crc kubenswrapper[4985]: E0128 18:42:31.463083 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="init" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.463138 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="init" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.463427 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f33e23a8-5c59-41b1-9afe-00977f966724" containerName="dnsmasq-dns" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.463544 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" containerName="heat-db-sync" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.464569 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.474770 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.474865 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data-custom\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.474984 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-combined-ca-bundle\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.475000 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4d2gw\" (UniqueName: \"kubernetes.io/projected/45d84233-dc44-4b3c-8aaa-f08ab50c0512-kube-api-access-4d2gw\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.481640 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5df4f6c8f9-fvvqb"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.571286 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-api-9d696c4dd-qgm9g"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.574843 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-combined-ca-bundle\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bthlv\" (UniqueName: \"kubernetes.io/projected/f91275ab-50ad-4d69-953f-764ccd276927-kube-api-access-bthlv\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588534 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data-custom\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.588719 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data-custom\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.592360 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-9d696c4dd-qgm9g"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.594679 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-internal-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598628 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-combined-ca-bundle\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598800 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4d2gw\" (UniqueName: \"kubernetes.io/projected/45d84233-dc44-4b3c-8aaa-f08ab50c0512-kube-api-access-4d2gw\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.598860 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-public-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.604056 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-combined-ca-bundle\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.604685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/45d84233-dc44-4b3c-8aaa-f08ab50c0512-config-data-custom\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.607825 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/heat-cfnapi-76b7548687-cmjrr"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.614535 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.635942 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76b7548687-cmjrr"] Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.654866 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4d2gw\" (UniqueName: \"kubernetes.io/projected/45d84233-dc44-4b3c-8aaa-f08ab50c0512-kube-api-access-4d2gw\") pod \"heat-engine-5df4f6c8f9-fvvqb\" (UID: \"45d84233-dc44-4b3c-8aaa-f08ab50c0512\") " pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.701603 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-combined-ca-bundle\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.701675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-internal-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.701732 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702009 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-internal-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702341 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-public-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702664 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-public-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702719 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-hw82v\" (UniqueName: \"kubernetes.io/projected/c761ae73-94d1-46be-afe6-1232e2c589ff-kube-api-access-hw82v\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702812 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-combined-ca-bundle\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702890 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data-custom\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.702938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bthlv\" (UniqueName: \"kubernetes.io/projected/f91275ab-50ad-4d69-953f-764ccd276927-kube-api-access-bthlv\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.703021 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data-custom\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.706586 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-combined-ca-bundle\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.706641 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data-custom\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.706732 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-internal-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.707935 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-public-tls-certs\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.709077 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f91275ab-50ad-4d69-953f-764ccd276927-config-data\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.720125 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bthlv\" (UniqueName: \"kubernetes.io/projected/f91275ab-50ad-4d69-953f-764ccd276927-kube-api-access-bthlv\") pod \"heat-api-9d696c4dd-qgm9g\" (UID: \"f91275ab-50ad-4d69-953f-764ccd276927\") " pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.786947 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.795878 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-public-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805183 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hw82v\" (UniqueName: \"kubernetes.io/projected/c761ae73-94d1-46be-afe6-1232e2c589ff-kube-api-access-hw82v\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data-custom\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805277 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-combined-ca-bundle\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-internal-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.805341 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.810046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data-custom\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.810108 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-internal-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.811402 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-public-tls-certs\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.813183 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-combined-ca-bundle\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.822182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c761ae73-94d1-46be-afe6-1232e2c589ff-config-data\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.833543 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hw82v\" (UniqueName: \"kubernetes.io/projected/c761ae73-94d1-46be-afe6-1232e2c589ff-kube-api-access-hw82v\") pod \"heat-cfnapi-76b7548687-cmjrr\" (UID: \"c761ae73-94d1-46be-afe6-1232e2c589ff\") " pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:31 crc kubenswrapper[4985]: I0128 18:42:31.943371 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.210193 4985 scope.go:117] "RemoveContainer" containerID="00c5bac74e2813b5c78c4d3d883b158530767718be83285d64f4742a35e64806" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.249605 4985 scope.go:117] "RemoveContainer" containerID="d6979a9489721d74b8d4664bdfe5df656096756724de155696b85d31e7a0e2dd" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.276195 4985 scope.go:117] "RemoveContainer" containerID="e1a1c6117167cd879db9ae5539bf348a54302f9007388acd00fd5041acda647f" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.314669 4985 scope.go:117] "RemoveContainer" containerID="2a94f1b22150bff413a35eb8a3eed5745a2369fd30defeeb03ec8e8bb54d93e7" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.372632 4985 scope.go:117] "RemoveContainer" containerID="e79b0c26c13e421f90b1e346a7a6ed37fdf036d779d67dcae2b50acce53ce0c6" Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.376560 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-api-9d696c4dd-qgm9g"] Jan 28 18:42:32 crc kubenswrapper[4985]: W0128 18:42:32.383896 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf91275ab_50ad_4d69_953f_764ccd276927.slice/crio-1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446 WatchSource:0}: Error finding container 1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446: Status 404 returned error can't find the container with id 1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446 Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.482269 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-engine-5df4f6c8f9-fvvqb"] Jan 28 18:42:32 crc kubenswrapper[4985]: W0128 18:42:32.491409 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod45d84233_dc44_4b3c_8aaa_f08ab50c0512.slice/crio-f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69 WatchSource:0}: Error finding container f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69: Status 404 returned error can't find the container with id f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69 Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.577529 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" event={"ID":"45d84233-dc44-4b3c-8aaa-f08ab50c0512","Type":"ContainerStarted","Data":"f51930feb0bfbbeb832121e9e4781216b8bbecb150c7970083fc5b65973beb69"} Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.580443 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-9d696c4dd-qgm9g" event={"ID":"f91275ab-50ad-4d69-953f-764ccd276927","Type":"ContainerStarted","Data":"1e84d2fdfda9eb21570c20068d0645bc7c30a765bde3ef192c7c127a0c127446"} Jan 28 18:42:32 crc kubenswrapper[4985]: I0128 18:42:32.616995 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/heat-cfnapi-76b7548687-cmjrr"] Jan 28 18:42:32 crc kubenswrapper[4985]: W0128 18:42:32.640870 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc761ae73_94d1_46be_afe6_1232e2c589ff.slice/crio-4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64 WatchSource:0}: Error finding container 
4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64: Status 404 returned error can't find the container with id 4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64 Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.432180 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.598555 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" event={"ID":"45d84233-dc44-4b3c-8aaa-f08ab50c0512","Type":"ContainerStarted","Data":"16d7bbbf380aa65bd61b4ca60ba79649324b3433bb594ef93b14cb608ada2e9e"} Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.598623 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.601280 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76b7548687-cmjrr" event={"ID":"c761ae73-94d1-46be-afe6-1232e2c589ff","Type":"ContainerStarted","Data":"4b3bf40734b089e34a337426f34fc284909961f1818e329a90c896087898df64"} Jan 28 18:42:33 crc kubenswrapper[4985]: I0128 18:42:33.619016 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" podStartSLOduration=2.61899776 podStartE2EDuration="2.61899776s" podCreationTimestamp="2026-01-28 18:42:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:33.617532909 +0000 UTC m=+1764.444095730" watchObservedRunningTime="2026-01-28 18:42:33.61899776 +0000 UTC m=+1764.445560581" Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.263866 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:35 crc kubenswrapper[4985]: E0128 18:42:35.264653 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.360452 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5d75f767dc-jqtwd" Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.429724 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:35 crc kubenswrapper[4985]: I0128 18:42:35.430020 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" containerID="cri-o://b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1" gracePeriod=10 Jan 28 18:42:36 crc kubenswrapper[4985]: I0128 18:42:36.638358 4985 generic.go:334] "Generic (PLEG): container finished" podID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerID="b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1" exitCode=0 Jan 28 18:42:36 crc kubenswrapper[4985]: I0128 18:42:36.638438 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" 
event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerDied","Data":"b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1"} Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.835498 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941558 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941596 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941714 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941786 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.941823 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") pod \"851ea22a-e43d-4d11-911a-3ec541e6012c\" (UID: \"851ea22a-e43d-4d11-911a-3ec541e6012c\") " Jan 28 18:42:39 crc kubenswrapper[4985]: I0128 18:42:39.946617 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc" (OuterVolumeSpecName: "kube-api-access-tdqmc") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "kube-api-access-tdqmc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.006330 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.012893 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.014433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config" (OuterVolumeSpecName: "config") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.016462 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.024797 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.028946 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "851ea22a-e43d-4d11-911a-3ec541e6012c" (UID: "851ea22a-e43d-4d11-911a-3ec541e6012c"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044868 4985 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-config\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044908 4985 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044919 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044928 4985 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044937 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdqmc\" (UniqueName: \"kubernetes.io/projected/851ea22a-e43d-4d11-911a-3ec541e6012c-kube-api-access-tdqmc\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044944 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.044954 4985 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/851ea22a-e43d-4d11-911a-3ec541e6012c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.693136 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" event={"ID":"851ea22a-e43d-4d11-911a-3ec541e6012c","Type":"ContainerDied","Data":"da98239627d3370ef27352d22f95238ce0d007f495ebc106572103880ba5c81e"} Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.693190 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.693533 4985 scope.go:117] "RemoveContainer" containerID="b3dcf3d6435bc5a5ddd83babff8c6655e9b838e1a714aff2a291f7cb27e62bf1" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.728521 4985 scope.go:117] "RemoveContainer" containerID="eb06142e49a896d0f59b1509119df8e1b80f5b08d70235e7d7d845632e5598ca" Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.904951 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:40 crc kubenswrapper[4985]: I0128 18:42:40.917718 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b75489c6f-h8w5d"] Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.277959 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" path="/var/lib/kubelet/pods/851ea22a-e43d-4d11-911a-3ec541e6012c/volumes" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.710325 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b"} Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.717897 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-9d696c4dd-qgm9g" event={"ID":"f91275ab-50ad-4d69-953f-764ccd276927","Type":"ContainerStarted","Data":"6203296a26a2c0a12ed531e57f672d48f72672c1daf4b6cc8e1eddd5624419f3"} Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.718945 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.721046 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-76b7548687-cmjrr" event={"ID":"c761ae73-94d1-46be-afe6-1232e2c589ff","Type":"ContainerStarted","Data":"ad10a5387e49bec4b95c22f76fa4f6f5cc81171c5d425cf4b816d1158ff80871"} Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.721771 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.753015 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=3.16369897 podStartE2EDuration="52.752993057s" podCreationTimestamp="2026-01-28 18:41:49 +0000 UTC" firstStartedPulling="2026-01-28 18:41:50.890097276 +0000 UTC m=+1721.716660097" lastFinishedPulling="2026-01-28 18:42:40.479391363 +0000 UTC m=+1771.305954184" observedRunningTime="2026-01-28 18:42:41.737362836 +0000 UTC m=+1772.563925657" watchObservedRunningTime="2026-01-28 18:42:41.752993057 +0000 UTC m=+1772.579555878" Jan 28 18:42:41 crc kubenswrapper[4985]: I0128 18:42:41.789384 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-cfnapi-76b7548687-cmjrr" podStartSLOduration=2.975901329 podStartE2EDuration="10.789364874s" podCreationTimestamp="2026-01-28 18:42:31 +0000 UTC" firstStartedPulling="2026-01-28 18:42:32.660457914 +0000 UTC m=+1763.487020725" lastFinishedPulling="2026-01-28 18:42:40.473921449 +0000 UTC m=+1771.300484270" observedRunningTime="2026-01-28 18:42:41.777003045 +0000 UTC m=+1772.603565876" watchObservedRunningTime="2026-01-28 18:42:41.789364874 +0000 UTC m=+1772.615927695" Jan 28 18:42:41 crc 
kubenswrapper[4985]: I0128 18:42:41.832424 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/heat-api-9d696c4dd-qgm9g" podStartSLOduration=2.744886996 podStartE2EDuration="10.83240351s" podCreationTimestamp="2026-01-28 18:42:31 +0000 UTC" firstStartedPulling="2026-01-28 18:42:32.386614511 +0000 UTC m=+1763.213177332" lastFinishedPulling="2026-01-28 18:42:40.474131025 +0000 UTC m=+1771.300693846" observedRunningTime="2026-01-28 18:42:41.808897526 +0000 UTC m=+1772.635460347" watchObservedRunningTime="2026-01-28 18:42:41.83240351 +0000 UTC m=+1772.658966331" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.682290 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-5b75489c6f-h8w5d" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.1.14:5353: i/o timeout" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.860362 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk"] Jan 28 18:42:44 crc kubenswrapper[4985]: E0128 18:42:44.860829 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.860847 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" Jan 28 18:42:44 crc kubenswrapper[4985]: E0128 18:42:44.860864 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="init" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.860870 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="init" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.861115 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="851ea22a-e43d-4d11-911a-3ec541e6012c" containerName="dnsmasq-dns" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.862338 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.864207 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.864476 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.864718 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.865151 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.880410 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk"] Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.991537 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.991820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.991925 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:44 crc kubenswrapper[4985]: I0128 18:42:44.992076 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094046 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094144 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.094221 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.124425 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.125717 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.126151 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.128845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:45 crc kubenswrapper[4985]: I0128 18:42:45.189107 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:42:46 crc kubenswrapper[4985]: I0128 18:42:46.264491 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:46 crc kubenswrapper[4985]: E0128 18:42:46.265496 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:42:46 crc kubenswrapper[4985]: I0128 18:42:46.876805 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk"] Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.799530 4985 generic.go:334] "Generic (PLEG): container finished" podID="249a0e05-d210-402f-b7f8-2caf153346d8" containerID="0d9684f3d4336ae71b1f9fdea81d833a3ce461b76f547f6c936c89097d189168" exitCode=0 Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.799608 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerDied","Data":"0d9684f3d4336ae71b1f9fdea81d833a3ce461b76f547f6c936c89097d189168"} Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.802441 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerStarted","Data":"7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e"} Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.805343 4985 generic.go:334] "Generic (PLEG): container finished" podID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerID="e1d8d938a013e14e34718ea005c62adcdafbd122068babd1c11dc5a7c1422bf2" exitCode=0 Jan 28 18:42:47 crc kubenswrapper[4985]: I0128 18:42:47.805383 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerDied","Data":"e1d8d938a013e14e34718ea005c62adcdafbd122068babd1c11dc5a7c1422bf2"} Jan 28 18:42:49 crc kubenswrapper[4985]: I0128 18:42:49.839001 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"34d82dad-dc98-4c0f-90c2-0b25f7d73c01","Type":"ContainerStarted","Data":"070f57a18fdf2335b2c740c37fb18af687ed8b76af622c39d8ddd22e8fd2e739"} Jan 28 18:42:49 crc kubenswrapper[4985]: I0128 18:42:49.843023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-2" event={"ID":"249a0e05-d210-402f-b7f8-2caf153346d8","Type":"ContainerStarted","Data":"4d6bbe15fc0df126779e519f528cf5aa83fcff2224b5d45454ef6fbcd9ad0297"} Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.853892 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.854748 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-2" Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.877793 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" 
podStartSLOduration=48.877774912 podStartE2EDuration="48.877774912s" podCreationTimestamp="2026-01-28 18:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:50.876034132 +0000 UTC m=+1781.702596993" watchObservedRunningTime="2026-01-28 18:42:50.877774912 +0000 UTC m=+1781.704337743" Jan 28 18:42:50 crc kubenswrapper[4985]: I0128 18:42:50.910878 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-2" podStartSLOduration=48.910861146 podStartE2EDuration="48.910861146s" podCreationTimestamp="2026-01-28 18:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:42:50.898847717 +0000 UTC m=+1781.725410548" watchObservedRunningTime="2026-01-28 18:42:50.910861146 +0000 UTC m=+1781.737423957" Jan 28 18:42:51 crc kubenswrapper[4985]: I0128 18:42:51.949855 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" Jan 28 18:42:52 crc kubenswrapper[4985]: I0128 18:42:52.015164 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:42:52 crc kubenswrapper[4985]: I0128 18:42:52.015386 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-engine-54bf646c6-b6zb2" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" containerID="cri-o://c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" gracePeriod=60 Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.811622 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-9d696c4dd-qgm9g" podUID="f91275ab-50ad-4d69-953f-764ccd276927" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.17:8004/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.811660 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-api-9d696c4dd-qgm9g" podUID="f91275ab-50ad-4d69-953f-764ccd276927" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.1.17:8004/healthcheck\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.957053 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-76b7548687-cmjrr" podUID="c761ae73-94d1-46be-afe6-1232e2c589ff" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.18:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:42:56 crc kubenswrapper[4985]: I0128 18:42:56.957525 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-cfnapi-76b7548687-cmjrr" podUID="c761ae73-94d1-46be-afe6-1232e2c589ff" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.1.18:8000/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 18:42:57 crc kubenswrapper[4985]: I0128 18:42:57.264500 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:42:57 crc kubenswrapper[4985]: E0128 18:42:57.265171 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.787016 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.789274 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.790773 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:43:00 crc kubenswrapper[4985]: E0128 18:43:00.790840 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/heat-engine-54bf646c6-b6zb2" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" Jan 28 18:43:00 crc kubenswrapper[4985]: I0128 18:43:00.987407 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-cfnapi-76b7548687-cmjrr" Jan 28 18:43:00 crc kubenswrapper[4985]: I0128 18:43:00.989211 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/heat-api-9d696c4dd-qgm9g" Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.088991 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.089318 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" containerID="cri-o://ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433" gracePeriod=60 Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.109950 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:43:01 crc kubenswrapper[4985]: I0128 18:43:01.110399 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" containerID="cri-o://df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195" gracePeriod=60 Jan 28 18:43:02 crc kubenswrapper[4985]: I0128 18:43:02.512405 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 
10.217.1.12:5671: connect: connection refused" Jan 28 18:43:02 crc kubenswrapper[4985]: I0128 18:43:02.713220 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 28 18:43:04 crc kubenswrapper[4985]: I0128 18:43:04.773502 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": read tcp 10.217.0.2:59510->10.217.0.222:8000: read: connection reset by peer" Jan 28 18:43:04 crc kubenswrapper[4985]: I0128 18:43:04.785900 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": read tcp 10.217.0.2:43948->10.217.0.221:8004: read: connection reset by peer" Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.506315 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-hgpsv"] Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.520394 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-hgpsv"] Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.919077 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-db-sync-6bqfv"] Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.920883 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.924758 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:43:05 crc kubenswrapper[4985]: I0128 18:43:05.936303 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6bqfv"] Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.029628 4985 generic.go:334] "Generic (PLEG): container finished" podID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerID="ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433" exitCode=0 Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.029717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerDied","Data":"ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433"} Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.031692 4985 generic.go:334] "Generic (PLEG): container finished" podID="261340dd-15fd-43d9-8db3-3de095d8728a" containerID="df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195" exitCode=0 Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.031734 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerDied","Data":"df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195"} Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"aodh-db-sync-6bqfv\" (UID: 
\"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081295 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081396 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.081460 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.184635 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.185006 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.185085 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.185139 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.199715 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.199829 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.200115 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.201620 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"aodh-db-sync-6bqfv\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:06 crc kubenswrapper[4985]: I0128 18:43:06.294821 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:43:07 crc kubenswrapper[4985]: I0128 18:43:07.560737 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7decce21-e84c-4501-bf0d-ca01387c51ee" path="/var/lib/kubelet/pods/7decce21-e84c-4501-bf0d-ca01387c51ee/volumes" Jan 28 18:43:08 crc kubenswrapper[4985]: I0128 18:43:08.385501 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": dial tcp 10.217.0.221:8004: connect: connection refused" Jan 28 18:43:08 crc kubenswrapper[4985]: I0128 18:43:08.385567 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": dial tcp 10.217.0.222:8000: connect: connection refused" Jan 28 18:43:10 crc kubenswrapper[4985]: I0128 18:43:10.085192 4985 generic.go:334] "Generic (PLEG): container finished" podID="a907310b-926c-4b8e-b3db-b8a43844891c" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" exitCode=0 Jan 28 18:43:10 crc kubenswrapper[4985]: I0128 18:43:10.085307 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerDied","Data":"c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321"} Jan 28 18:43:10 crc kubenswrapper[4985]: I0128 18:43:10.265127 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.265885 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.785968 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.786220 4985 log.go:32] "ExecSync cmd from runtime service 
failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.788643 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" cmd=["/usr/bin/pgrep","-r","DRST","heat-engine"] Jan 28 18:43:10 crc kubenswrapper[4985]: E0128 18:43:10.788684 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321 is running failed: container process not found" probeType="Readiness" pod="openstack/heat-engine-54bf646c6-b6zb2" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" Jan 28 18:43:12 crc kubenswrapper[4985]: I0128 18:43:12.502304 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.12:5671: connect: connection refused" Jan 28 18:43:12 crc kubenswrapper[4985]: I0128 18:43:12.712425 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.385725 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-api-78f74b8b49-ngj6j" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" probeResult="failure" output="Get \"https://10.217.0.221:8004/healthcheck\": dial tcp 10.217.0.221:8004: connect: connection refused" Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.385853 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.386322 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" probeResult="failure" output="Get \"https://10.217.0.222:8000/healthcheck\": dial tcp 10.217.0.222:8000: connect: connection refused" Jan 28 18:43:13 crc kubenswrapper[4985]: I0128 18:43:13.386543 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:43:16 crc kubenswrapper[4985]: E0128 18:43:16.755776 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest" Jan 28 18:43:16 crc kubenswrapper[4985]: E0128 18:43:16.756568 4985 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 18:43:16 crc kubenswrapper[4985]: container 
&Container{Name:repo-setup-edpm-deployment-openstack-edpm-ipam,Image:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,Command:[],Args:[ansible-runner run /runner -p playbook.yaml -i repo-setup-edpm-deployment-openstack-edpm-ipam],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ANSIBLE_VERBOSITY,Value:2,ValueFrom:nil,},EnvVar{Name:RUNNER_PLAYBOOK,Value: Jan 28 18:43:16 crc kubenswrapper[4985]: - hosts: all Jan 28 18:43:16 crc kubenswrapper[4985]: strategy: linear Jan 28 18:43:16 crc kubenswrapper[4985]: tasks: Jan 28 18:43:16 crc kubenswrapper[4985]: - name: Enable podified-repos Jan 28 18:43:16 crc kubenswrapper[4985]: become: true Jan 28 18:43:16 crc kubenswrapper[4985]: ansible.builtin.shell: | Jan 28 18:43:16 crc kubenswrapper[4985]: set -euxo pipefail Jan 28 18:43:16 crc kubenswrapper[4985]: pushd /var/tmp Jan 28 18:43:16 crc kubenswrapper[4985]: curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz Jan 28 18:43:16 crc kubenswrapper[4985]: pushd repo-setup-main Jan 28 18:43:16 crc kubenswrapper[4985]: python3 -m venv ./venv Jan 28 18:43:16 crc kubenswrapper[4985]: PBR_VERSION=0.0.0 ./venv/bin/pip install ./ Jan 28 18:43:16 crc kubenswrapper[4985]: ./venv/bin/repo-setup current-podified -b antelope Jan 28 18:43:16 crc kubenswrapper[4985]: popd Jan 28 18:43:16 crc kubenswrapper[4985]: rm -rf repo-setup-main Jan 28 18:43:16 crc kubenswrapper[4985]: Jan 28 18:43:16 crc kubenswrapper[4985]: Jan 28 18:43:16 crc kubenswrapper[4985]: ,ValueFrom:nil,},EnvVar{Name:RUNNER_EXTRA_VARS,Value: Jan 28 18:43:16 crc kubenswrapper[4985]: edpm_override_hosts: openstack-edpm-ipam Jan 28 18:43:16 crc kubenswrapper[4985]: edpm_service_type: repo-setup Jan 28 18:43:16 crc kubenswrapper[4985]: Jan 28 18:43:16 crc kubenswrapper[4985]: Jan 28 18:43:16 crc kubenswrapper[4985]: ,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:repo-setup-combined-ca-bundle,ReadOnly:false,MountPath:/var/lib/openstack/cacerts/repo-setup,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key-openstack-edpm-ipam,ReadOnly:false,MountPath:/runner/env/ssh_key/ssh_key_openstack-edpm-ipam,SubPath:ssh_key_openstack-edpm-ipam,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:inventory,ReadOnly:false,MountPath:/runner/inventory/hosts,SubPath:inventory,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h6897,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:openstack-aee-default-env,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk_openstack(7a5d3484-2192-44a6-b632-5a683af945d6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 28 18:43:16 crc kubenswrapper[4985]: > logger="UnhandledError" Jan 28 18:43:16 crc kubenswrapper[4985]: E0128 18:43:16.758019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" Jan 28 18:43:17 crc kubenswrapper[4985]: E0128 18:43:17.204588 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"repo-setup-edpm-deployment-openstack-edpm-ipam\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest\\\"\"" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.522330 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-db-sync-6bqfv"] Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.665525 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.680612 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.707703 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820560 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820611 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820639 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820759 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") pod 
\"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820871 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820897 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820929 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") pod \"261340dd-15fd-43d9-8db3-3de095d8728a\" (UID: \"261340dd-15fd-43d9-8db3-3de095d8728a\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.820985 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821004 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821107 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") pod \"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821123 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") pod \"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821146 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821170 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod 
\"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.821217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") pod \"a907310b-926c-4b8e-b3db-b8a43844891c\" (UID: \"a907310b-926c-4b8e-b3db-b8a43844891c\") " Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.848018 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj" (OuterVolumeSpecName: "kube-api-access-7kccj") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "kube-api-access-7kccj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.850674 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.850736 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.852032 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z" (OuterVolumeSpecName: "kube-api-access-jnf9z") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "kube-api-access-jnf9z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.853063 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd" (OuterVolumeSpecName: "kube-api-access-sxzqd") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). InnerVolumeSpecName "kube-api-access-sxzqd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.874534 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.891222 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.899448 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.922038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923866 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sxzqd\" (UniqueName: \"kubernetes.io/projected/a907310b-926c-4b8e-b3db-b8a43844891c-kube-api-access-sxzqd\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923899 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923912 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923923 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923935 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923946 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923957 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jnf9z\" (UniqueName: \"kubernetes.io/projected/261340dd-15fd-43d9-8db3-3de095d8728a-kube-api-access-jnf9z\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923968 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kccj\" (UniqueName: \"kubernetes.io/projected/f0c2a92a-343c-42fa-a740-8bb10701d271-kube-api-access-7kccj\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.923978 4985 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.938531 4985 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data" (OuterVolumeSpecName: "config-data") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.951119 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "261340dd-15fd-43d9-8db3-3de095d8728a" (UID: "261340dd-15fd-43d9-8db3-3de095d8728a"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:17 crc kubenswrapper[4985]: I0128 18:43:17.960560 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data" (OuterVolumeSpecName: "config-data") pod "a907310b-926c-4b8e-b3db-b8a43844891c" (UID: "a907310b-926c-4b8e-b3db-b8a43844891c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.025478 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.025575 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026525 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026645 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data" (OuterVolumeSpecName: "config-data") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026688 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:18 crc kubenswrapper[4985]: W0128 18:43:18.026725 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f0c2a92a-343c-42fa-a740-8bb10701d271/volumes/kubernetes.io~secret/combined-ca-bundle Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026734 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026737 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") pod \"f0c2a92a-343c-42fa-a740-8bb10701d271\" (UID: \"f0c2a92a-343c-42fa-a740-8bb10701d271\") " Jan 28 18:43:18 crc kubenswrapper[4985]: W0128 18:43:18.026888 4985 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/f0c2a92a-343c-42fa-a740-8bb10701d271/volumes/kubernetes.io~secret/internal-tls-certs Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.026902 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f0c2a92a-343c-42fa-a740-8bb10701d271" (UID: "f0c2a92a-343c-42fa-a740-8bb10701d271"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027774 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027799 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a907310b-926c-4b8e-b3db-b8a43844891c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027811 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027822 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027831 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027839 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0c2a92a-343c-42fa-a740-8bb10701d271-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.027847 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/261340dd-15fd-43d9-8db3-3de095d8728a-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.235554 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-api-78f74b8b49-ngj6j" event={"ID":"261340dd-15fd-43d9-8db3-3de095d8728a","Type":"ContainerDied","Data":"21398e04f7c58bcaa01a9d450633b9dd30bf48b5e1dde83202d275ec2b22003a"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.235846 4985 scope.go:117] "RemoveContainer" containerID="df4c3bf440a91085353fe1dff162d3bc31eb707fce7be15716ee9580c55e1195" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.235997 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-api-78f74b8b49-ngj6j" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.269483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" event={"ID":"f0c2a92a-343c-42fa-a740-8bb10701d271","Type":"ContainerDied","Data":"949f1904b14ba2cbd62ce6062414ba4496f2a1480543442a29b61571a29497fd"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.269548 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/heat-cfnapi-db4c676cd-xbwzr" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.286510 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerStarted","Data":"ecdfc8afa4f2b868f84dc5832f39a80a33774a8c5d26cccc6c2784958c84b2cf"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.306763 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/heat-engine-54bf646c6-b6zb2" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.307031 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.307075 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/heat-engine-54bf646c6-b6zb2" event={"ID":"a907310b-926c-4b8e-b3db-b8a43844891c","Type":"ContainerDied","Data":"c2cd5ecab7f62d49a442677c7f74b95e91134604fb9c330ec7bb5b250544e223"} Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.329853 4985 scope.go:117] "RemoveContainer" containerID="ff2e4ede92f22c252052c669b18beaa2f7fba2ec3c7930654e6336cf8415f433" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.344303 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-api-78f74b8b49-ngj6j"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.373315 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.374542 4985 scope.go:117] "RemoveContainer" containerID="c01f7ecaba454c3a9034dfc45d8aa4c1e6652f9b862d7ae1e99cedf01d672321" Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.422771 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-cfnapi-db4c676cd-xbwzr"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.443691 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:43:18 crc kubenswrapper[4985]: I0128 18:43:18.456280 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-engine-54bf646c6-b6zb2"] Jan 28 18:43:19 crc kubenswrapper[4985]: I0128 18:43:19.279776 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" path="/var/lib/kubelet/pods/261340dd-15fd-43d9-8db3-3de095d8728a/volumes" Jan 28 18:43:19 crc kubenswrapper[4985]: I0128 18:43:19.280688 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" path="/var/lib/kubelet/pods/a907310b-926c-4b8e-b3db-b8a43844891c/volumes" Jan 28 18:43:19 crc kubenswrapper[4985]: I0128 18:43:19.281265 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" path="/var/lib/kubelet/pods/f0c2a92a-343c-42fa-a740-8bb10701d271/volumes" Jan 28 18:43:22 crc kubenswrapper[4985]: I0128 18:43:22.502562 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.12:5671: connect: connection refused" Jan 28 18:43:22 crc kubenswrapper[4985]: I0128 18:43:22.712613 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 28 18:43:23 crc kubenswrapper[4985]: I0128 18:43:23.267125 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:43:23 crc kubenswrapper[4985]: E0128 18:43:23.267540 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:43:32 crc kubenswrapper[4985]: I0128 18:43:32.504394 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="34d82dad-dc98-4c0f-90c2-0b25f7d73c01" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.12:5671: connect: connection refused" Jan 28 18:43:32 crc kubenswrapper[4985]: I0128 18:43:32.712117 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 28 18:43:32 crc kubenswrapper[4985]: I0128 18:43:32.734657 4985 scope.go:117] "RemoveContainer" containerID="d7223a7a628a68fecc17a7f4ec70d47a10ad7c02ac73f8bb90091f9b898b7963" Jan 28 18:43:34 crc kubenswrapper[4985]: I0128 18:43:34.264551 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:43:34 crc kubenswrapper[4985]: E0128 18:43:34.265477 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:43:35 crc kubenswrapper[4985]: I0128 18:43:35.830498 4985 scope.go:117] "RemoveContainer" containerID="16a274b711b7c65f8bac3402c7e48f9e20237b3e266544fb803379dddb341a3e" Jan 28 18:43:35 crc kubenswrapper[4985]: I0128 18:43:35.929969 4985 scope.go:117] "RemoveContainer" containerID="66f1056465a2a42e3f35e272ee20feffc3abdbca774c043c1fecefff9950bd98" Jan 28 18:43:36 crc kubenswrapper[4985]: I0128 18:43:36.009140 4985 scope.go:117] "RemoveContainer" containerID="f090f667713f31e333608c60874aca9b174e0dc6eb4e52fb2779980ecf229992" Jan 28 18:43:36 crc kubenswrapper[4985]: I0128 18:43:36.048997 4985 scope.go:117] "RemoveContainer" containerID="12e6aacaa8527f36ddf49eb87d558411736fa67a95ae92f557207b934aed3337" Jan 28 18:43:36 crc kubenswrapper[4985]: I0128 18:43:36.163863 4985 scope.go:117] "RemoveContainer" containerID="9509d6e218ba21bbc37656ba000006afdb482de8a139625efa29d73de7dc2a95" Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.955407 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.955524 4985 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested" Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.955760 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:aodh-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:AodhPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:AodhPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:aodh-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fmkqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42402,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod aodh-db-sync-6bqfv_openstack(d276e0b0-f662-443c-a126-003ee44287c8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 18:43:37 crc kubenswrapper[4985]: E0128 18:43:37.957612 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/aodh-db-sync-6bqfv" podUID="d276e0b0-f662-443c-a126-003ee44287c8" Jan 28 18:43:38 crc kubenswrapper[4985]: E0128 18:43:38.578081 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"aodh-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-aodh-api:current-tested\\\"\"" pod="openstack/aodh-db-sync-6bqfv" podUID="d276e0b0-f662-443c-a126-003ee44287c8" Jan 28 18:43:42 crc kubenswrapper[4985]: I0128 18:43:42.504159 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 18:43:42 crc kubenswrapper[4985]: I0128 18:43:42.712044 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-2" podUID="249a0e05-d210-402f-b7f8-2caf153346d8" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.1.13:5671: connect: connection refused" Jan 28 18:43:42 crc kubenswrapper[4985]: I0128 18:43:42.851882 4985 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:43:43 crc kubenswrapper[4985]: I0128 18:43:43.648892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerStarted","Data":"e803a48767e57173d8a437957c1d078418a2e9321f0bb9972b4c3e1e7fb17ef1"} Jan 28 18:43:43 crc kubenswrapper[4985]: I0128 18:43:43.683281 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" podStartSLOduration=3.713064809 podStartE2EDuration="59.683221315s" podCreationTimestamp="2026-01-28 18:42:44 +0000 UTC" firstStartedPulling="2026-01-28 18:42:46.879393127 +0000 UTC m=+1777.705955938" lastFinishedPulling="2026-01-28 18:43:42.849549623 +0000 UTC m=+1833.676112444" observedRunningTime="2026-01-28 18:43:43.664181077 +0000 UTC m=+1834.490743898" watchObservedRunningTime="2026-01-28 18:43:43.683221315 +0000 UTC m=+1834.509784136" Jan 28 18:43:49 crc kubenswrapper[4985]: I0128 18:43:49.264953 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:43:49 crc kubenswrapper[4985]: E0128 18:43:49.265574 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:43:52 crc kubenswrapper[4985]: I0128 18:43:52.712453 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-2" Jan 28 18:43:52 crc kubenswrapper[4985]: I0128 18:43:52.788339 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:43:54 crc kubenswrapper[4985]: I0128 18:43:54.798599 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 18:43:55 crc kubenswrapper[4985]: I0128 18:43:55.842220 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerStarted","Data":"7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d"} Jan 28 18:43:55 crc kubenswrapper[4985]: I0128 18:43:55.866049 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-db-sync-6bqfv" podStartSLOduration=13.634935864 podStartE2EDuration="50.86602383s" podCreationTimestamp="2026-01-28 18:43:05 +0000 UTC" firstStartedPulling="2026-01-28 18:43:17.564603861 +0000 UTC m=+1808.391166682" lastFinishedPulling="2026-01-28 18:43:54.795691827 +0000 UTC m=+1845.622254648" observedRunningTime="2026-01-28 18:43:55.85891706 +0000 UTC m=+1846.685479881" watchObservedRunningTime="2026-01-28 18:43:55.86602383 +0000 UTC m=+1846.692586671" Jan 28 18:43:57 crc kubenswrapper[4985]: I0128 18:43:57.457066 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-1" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" containerID="cri-o://40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247" gracePeriod=604796 Jan 28 18:43:58 crc kubenswrapper[4985]: I0128 18:43:58.879236 
4985 generic.go:334] "Generic (PLEG): container finished" podID="7a5d3484-2192-44a6-b632-5a683af945d6" containerID="e803a48767e57173d8a437957c1d078418a2e9321f0bb9972b4c3e1e7fb17ef1" exitCode=0 Jan 28 18:43:58 crc kubenswrapper[4985]: I0128 18:43:58.879323 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerDied","Data":"e803a48767e57173d8a437957c1d078418a2e9321f0bb9972b4c3e1e7fb17ef1"} Jan 28 18:43:59 crc kubenswrapper[4985]: I0128 18:43:59.860792 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-1" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.130:5671: connect: connection refused" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.124888 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.282436 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") pod \"7a5d3484-2192-44a6-b632-5a683af945d6\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.282595 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") pod \"7a5d3484-2192-44a6-b632-5a683af945d6\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.283539 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") pod \"7a5d3484-2192-44a6-b632-5a683af945d6\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.283610 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") pod \"7a5d3484-2192-44a6-b632-5a683af945d6\" (UID: \"7a5d3484-2192-44a6-b632-5a683af945d6\") " Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.287655 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897" (OuterVolumeSpecName: "kube-api-access-h6897") pod "7a5d3484-2192-44a6-b632-5a683af945d6" (UID: "7a5d3484-2192-44a6-b632-5a683af945d6"). InnerVolumeSpecName "kube-api-access-h6897". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.287899 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7a5d3484-2192-44a6-b632-5a683af945d6" (UID: "7a5d3484-2192-44a6-b632-5a683af945d6"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.319480 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7a5d3484-2192-44a6-b632-5a683af945d6" (UID: "7a5d3484-2192-44a6-b632-5a683af945d6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.327016 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory" (OuterVolumeSpecName: "inventory") pod "7a5d3484-2192-44a6-b632-5a683af945d6" (UID: "7a5d3484-2192-44a6-b632-5a683af945d6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386673 4985 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386714 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386724 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7a5d3484-2192-44a6-b632-5a683af945d6-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.386734 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h6897\" (UniqueName: \"kubernetes.io/projected/7a5d3484-2192-44a6-b632-5a683af945d6-kube-api-access-h6897\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.936822 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.936965 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk" event={"ID":"7a5d3484-2192-44a6-b632-5a683af945d6","Type":"ContainerDied","Data":"7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e"} Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.937019 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e" Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.941025 4985 generic.go:334] "Generic (PLEG): container finished" podID="d276e0b0-f662-443c-a126-003ee44287c8" containerID="7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d" exitCode=0 Jan 28 18:44:01 crc kubenswrapper[4985]: I0128 18:44:01.941111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerDied","Data":"7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d"} Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.195502 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5d3484_2192_44a6_b632_5a683af945d6.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7a5d3484_2192_44a6_b632_5a683af945d6.slice/crio-7c7a4afd6d6cdbdaa13f82b8cf1f686b4e15c7a50303b642026bcbf65746941e\": RecentStats: unable to find data in memory cache]" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.223715 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"] Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224456 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224481 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224499 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224505 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224531 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224537 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.224551 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224557 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" 
containerName="heat-engine" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224790 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a907310b-926c-4b8e-b3db-b8a43844891c" containerName="heat-engine" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224811 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0c2a92a-343c-42fa-a740-8bb10701d271" containerName="heat-cfnapi" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224832 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="261340dd-15fd-43d9-8db3-3de095d8728a" containerName="heat-api" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.224853 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a5d3484-2192-44a6-b632-5a683af945d6" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.225934 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229719 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229756 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229789 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.229724 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.251775 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"] Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.265283 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:02 crc kubenswrapper[4985]: E0128 18:44:02.265850 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.313666 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.313848 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.313916 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.416337 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.416686 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.416983 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.424218 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.424471 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.438020 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xgv8j\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:02 crc kubenswrapper[4985]: I0128 18:44:02.549358 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.097433 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"] Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.267164 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6bqfv" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.441171 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.441384 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.441414 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.442211 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") pod \"d276e0b0-f662-443c-a126-003ee44287c8\" (UID: \"d276e0b0-f662-443c-a126-003ee44287c8\") " Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.447007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts" (OuterVolumeSpecName: "scripts") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.452098 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr" (OuterVolumeSpecName: "kube-api-access-fmkqr") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). InnerVolumeSpecName "kube-api-access-fmkqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.475281 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.484122 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data" (OuterVolumeSpecName: "config-data") pod "d276e0b0-f662-443c-a126-003ee44287c8" (UID: "d276e0b0-f662-443c-a126-003ee44287c8"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545531 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545572 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545585 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fmkqr\" (UniqueName: \"kubernetes.io/projected/d276e0b0-f662-443c-a126-003ee44287c8-kube-api-access-fmkqr\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.545603 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d276e0b0-f662-443c-a126-003ee44287c8-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.963646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerStarted","Data":"ecdced9e50dc70f2eb69194df14784349ed0af2d4baa3abe5de9f65f07e14e66"} Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.963691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerStarted","Data":"99e90286bb93168beee09d961f200ea37eff2b69082fa47f4c51a1f62dd08a43"} Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.966977 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-db-sync-6bqfv" event={"ID":"d276e0b0-f662-443c-a126-003ee44287c8","Type":"ContainerDied","Data":"ecdfc8afa4f2b868f84dc5832f39a80a33774a8c5d26cccc6c2784958c84b2cf"} Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.967015 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecdfc8afa4f2b868f84dc5832f39a80a33774a8c5d26cccc6c2784958c84b2cf" Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.967018 4985 util.go:48] "No ready sandbox for pod can be found. 
Jan 28 18:44:03 crc kubenswrapper[4985]: I0128 18:44:03.967018 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-db-sync-6bqfv"
Jan 28 18:44:04 crc kubenswrapper[4985]: I0128 18:44:04.001278 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" podStartSLOduration=1.520143616 podStartE2EDuration="2.001209411s" podCreationTimestamp="2026-01-28 18:44:02 +0000 UTC" firstStartedPulling="2026-01-28 18:44:03.098750777 +0000 UTC m=+1853.925313598" lastFinishedPulling="2026-01-28 18:44:03.579816572 +0000 UTC m=+1854.406379393" observedRunningTime="2026-01-28 18:44:03.986926078 +0000 UTC m=+1854.813488899" watchObservedRunningTime="2026-01-28 18:44:04.001209411 +0000 UTC m=+1854.827772232"
Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.825577 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"]
Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826211 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" containerID="cri-o://352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7" gracePeriod=30
Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826382 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" containerID="cri-o://a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895" gracePeriod=30
Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826473 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" containerID="cri-o://0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65" gracePeriod=30
Jan 28 18:44:05 crc kubenswrapper[4985]: I0128 18:44:05.826682 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/aodh-0" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-listener" containerID="cri-o://3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731" gracePeriod=30
Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.025321 4985 generic.go:334] "Generic (PLEG): container finished" podID="313d3857-140a-4a66-8329-12453fc8dd4c" containerID="40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247" exitCode=0
Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.025409 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerDied","Data":"40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247"}
Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.029031 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895" exitCode=0
Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.029054 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7" exitCode=0
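
The pod_startup_latency_tracker.go:104 entry above can be decoded from its own fields: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A quick check in Python against the xgv8j entry; this is a reading of the logged fields, the authoritative definition lives in kubelet's pod_startup_latency_tracker.go:

    # Seconds past 18:44:00, copied from the tracker entry for ...-xgv8j.
    created        = 2.0            # podCreationTimestamp  18:44:02
    first_pulling  = 3.098750777    # firstStartedPulling
    last_pulling   = 3.579816572    # lastFinishedPulling
    watch_observed = 4.001209411    # watchObservedRunningTime

    e2e = watch_observed - created                  # -> 2.001209411 s
    slo = e2e - (last_pulling - first_pulling)      # -> 1.520143616 s
    print(f"podStartE2EDuration={e2e:.9f}s podStartSLOduration={slo:.9f}")

Both results match the logged podStartE2EDuration="2.001209411s" and podStartSLOduration=1.520143616 exactly; the same arithmetic reproduces the bootstrap-edpm tracker entry at 18:44:11 further down.
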
event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895"} Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.029167 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7"} Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.032456 4985 generic.go:334] "Generic (PLEG): container finished" podID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerID="ecdced9e50dc70f2eb69194df14784349ed0af2d4baa3abe5de9f65f07e14e66" exitCode=0 Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.032491 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerDied","Data":"ecdced9e50dc70f2eb69194df14784349ed0af2d4baa3abe5de9f65f07e14e66"} Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.297681 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459217 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459608 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459786 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459851 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459929 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.459994 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.460901 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.460965 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.460998 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.461030 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.461114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") pod \"313d3857-140a-4a66-8329-12453fc8dd4c\" (UID: \"313d3857-140a-4a66-8329-12453fc8dd4c\") " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.461309 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.462800 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.463393 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.463563 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.463605 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.466060 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info" (OuterVolumeSpecName: "pod-info") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.471378 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.488211 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.488330 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc" (OuterVolumeSpecName: "kube-api-access-7t6vc") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "kube-api-access-7t6vc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.499982 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832" (OuterVolumeSpecName: "persistence") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "pvc-4b595522-7516-4d20-a11a-582dd7716832". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.501428 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data" (OuterVolumeSpecName: "config-data") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.557883 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf" (OuterVolumeSpecName: "server-conf") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566086 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566110 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566120 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/313d3857-140a-4a66-8329-12453fc8dd4c-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566129 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t6vc\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-kube-api-access-7t6vc\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566137 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566166 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") on node \"crc\" " Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566177 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/313d3857-140a-4a66-8329-12453fc8dd4c-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.566186 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/313d3857-140a-4a66-8329-12453fc8dd4c-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.607998 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.608193 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-4b595522-7516-4d20-a11a-582dd7716832" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832") on node "crc" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.615613 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "313d3857-140a-4a66-8329-12453fc8dd4c" (UID: "313d3857-140a-4a66-8329-12453fc8dd4c"). 
InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.668701 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:07 crc kubenswrapper[4985]: I0128 18:44:07.668751 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/313d3857-140a-4a66-8329-12453fc8dd4c-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.044119 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"313d3857-140a-4a66-8329-12453fc8dd4c","Type":"ContainerDied","Data":"17211bf5e9b8b8c383ea958cf8ed251d1d40c28a9c6c3e8e814a8d59072a3363"} Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.044163 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.044184 4985 scope.go:117] "RemoveContainer" containerID="40373a1abb092cff6ca0fd81aa96440eb2bcdae3ad3cb420a1cbe1ebb7f76247" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.097380 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.104663 4985 scope.go:117] "RemoveContainer" containerID="4546478e3b48ee65a1e4f5b248d4caed2739a0baae4f2cf1c67d5da021b79ce7" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.120286 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.146057 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.147092 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147118 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.147142 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="setup-container" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147151 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="setup-container" Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.147196 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d276e0b0-f662-443c-a126-003ee44287c8" containerName="aodh-db-sync" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147205 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d276e0b0-f662-443c-a126-003ee44287c8" containerName="aodh-db-sync" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147567 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" containerName="rabbitmq" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.147599 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d276e0b0-f662-443c-a126-003ee44287c8" containerName="aodh-db-sync" Jan 28 18:44:08 crc 
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.149451 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.189343 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"]
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284359 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf27z\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-kube-api-access-zf27z\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284423 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae555e00-c2df-4fce-af07-a91133f8767d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284563 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284620 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.284692 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287486 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287626 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287742 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.287924 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae555e00-c2df-4fce-af07-a91133f8767d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.288128 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-config-data\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.291625 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395664 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395735 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395827 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395911 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.395941 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.396027 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae555e00-c2df-4fce-af07-a91133f8767d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.396105 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-config-data\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.396996 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-plugins-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397032 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-config-data\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397126 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397322 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf27z\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-kube-api-access-zf27z\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae555e00-c2df-4fce-af07-a91133f8767d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.397493 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.398489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-plugins\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.404930 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/ae555e00-c2df-4fce-af07-a91133f8767d-server-conf\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.404954 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/ae555e00-c2df-4fce-af07-a91133f8767d-pod-info\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.406046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.407632 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-confd\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.408267 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.408291 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ce250563889cf210f76b1961aa7444b8cbe0d3f306db896236b924f9bdc2ed03/globalmount\"" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.411925 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-rabbitmq-tls\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.417971 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf27z\" (UniqueName: \"kubernetes.io/projected/ae555e00-c2df-4fce-af07-a91133f8767d-kube-api-access-zf27z\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.421995 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/ae555e00-c2df-4fce-af07-a91133f8767d-erlang-cookie-secret\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.537551 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-4b595522-7516-4d20-a11a-582dd7716832\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-4b595522-7516-4d20-a11a-582dd7716832\") pod \"rabbitmq-server-1\" (UID: \"ae555e00-c2df-4fce-af07-a91133f8767d\") " pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.544502 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.701088 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j"
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.815265 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") "
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.815794 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") "
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.815829 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") "
Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.822927 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps" (OuterVolumeSpecName: "kube-api-access-5djps") pod "3b94af3f-603c-4a3e-966e-7a4bfbc78178" (UID: "3b94af3f-603c-4a3e-966e-7a4bfbc78178"). InnerVolumeSpecName "kube-api-access-5djps". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:44:08 crc kubenswrapper[4985]: E0128 18:44:08.849483 4985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam podName:3b94af3f-603c-4a3e-966e-7a4bfbc78178 nodeName:}" failed. No retries permitted until 2026-01-28 18:44:09.349459576 +0000 UTC m=+1860.176022397 (durationBeforeRetry 500ms). Error: error cleaning subPath mounts for volume "ssh-key-openstack-edpm-ipam" (UniqueName: "kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam") pod "3b94af3f-603c-4a3e-966e-7a4bfbc78178" (UID: "3b94af3f-603c-4a3e-966e-7a4bfbc78178") : error deleting /var/lib/kubelet/pods/3b94af3f-603c-4a3e-966e-7a4bfbc78178/volume-subpaths: remove /var/lib/kubelet/pods/3b94af3f-603c-4a3e-966e-7a4bfbc78178/volume-subpaths: no such file or directory
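
The E-level nestedpendingoperations.go:348 entry above is the kubelet's per-volume retry guard: cleaning subPath mounts for ssh-key-openstack-edpm-ipam failed because the volume-subpaths directory was already gone, so the operation is parked with "No retries permitted until 18:44:09.349" (durationBeforeRetry 500ms); the retry visible below at 18:44:09.431511 succeeds. The delay grows exponentially on repeated failures; a generic sketch of that policy, where the 500 ms initial delay comes from this entry but the factor and cap are assumptions:

    def next_delay(last_delay_s: float, initial: float = 0.5,
                   factor: float = 2.0, cap: float = 120.0) -> float:
        """Exponential backoff of the kind nestedpendingoperations applies
        to failed volume operations."""
        return initial if last_delay_s == 0 else min(last_delay_s * factor, cap)
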
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.919469 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:08 crc kubenswrapper[4985]: I0128 18:44:08.919514 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5djps\" (UniqueName: \"kubernetes.io/projected/3b94af3f-603c-4a3e-966e-7a4bfbc78178-kube-api-access-5djps\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.046667 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-1"] Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.059673 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.059843 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xgv8j" event={"ID":"3b94af3f-603c-4a3e-966e-7a4bfbc78178","Type":"ContainerDied","Data":"99e90286bb93168beee09d961f200ea37eff2b69082fa47f4c51a1f62dd08a43"} Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.059886 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99e90286bb93168beee09d961f200ea37eff2b69082fa47f4c51a1f62dd08a43" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.127710 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"] Jan 28 18:44:09 crc kubenswrapper[4985]: E0128 18:44:09.128186 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.128203 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.128467 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b94af3f-603c-4a3e-966e-7a4bfbc78178" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.129306 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.129306 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.170483 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"]
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227336 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227382 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227641 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.227747 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.277150 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="313d3857-140a-4a66-8329-12453fc8dd4c" path="/var/lib/kubelet/pods/313d3857-140a-4a66-8329-12453fc8dd4c/volumes"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.329736 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.329969 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.330146 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.330205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.333754 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.333916 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.334112 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.355320 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.431511 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") pod \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\" (UID: \"3b94af3f-603c-4a3e-966e-7a4bfbc78178\") "
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.434742 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3b94af3f-603c-4a3e-966e-7a4bfbc78178" (UID: "3b94af3f-603c-4a3e-966e-7a4bfbc78178"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.458417 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"
Jan 28 18:44:09 crc kubenswrapper[4985]: I0128 18:44:09.535104 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3b94af3f-603c-4a3e-966e-7a4bfbc78178-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 18:44:10 crc kubenswrapper[4985]: I0128 18:44:10.045314 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx"]
Jan 28 18:44:10 crc kubenswrapper[4985]: W0128 18:44:10.045774 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3865f1db_f707_4b28_bbf2_8ce1975baa1f.slice/crio-1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b WatchSource:0}: Error finding container 1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b: Status 404 returned error can't find the container with id 1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b
Jan 28 18:44:10 crc kubenswrapper[4985]: I0128 18:44:10.073503 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerStarted","Data":"cc0f2c6847c1a9b5425f85e49cf7204693ce4a7d7259a408948f5275caec3ac2"}
Jan 28 18:44:10 crc kubenswrapper[4985]: I0128 18:44:10.075391 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerStarted","Data":"1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b"}
Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.099677 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731" exitCode=0
Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.100254 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerID="0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65" exitCode=0
Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.100135 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731"}
Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.100395 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65"}
Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.104109 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerStarted","Data":"3f596ee94730f42a50d8192fb4c5ca1568a36162c5e3f9d2fddd534fad4f30ed"}
Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.106000 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerStarted","Data":"bc9afc05871aa23d4c3db1d4e88d2efe8c3615724cb67da049ef34770cd610ef"}
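
The generic.go:334 "container finished" lines and the kubelet.go:2453 "SyncLoop (PLEG): event for pod" lines above are the relist-based PLEG stream the sync loop consumes; pairing ContainerStarted/ContainerDied by container ID gives rough container lifetimes. (The W-level manager.go:1169 404 at 18:44:10.045774 reads as a transient race: cadvisor's cgroup watch fired before the just-created crio-1bfb1cd9... container was queryable.) A Python sketch of the pairing, ours for analysis:

    import re

    PLEG = re.compile(
        r'I\d{4} (\d{2}):(\d{2}):(\d{2}\.\d+).*'
        r'"Type":"(ContainerStarted|ContainerDied)","Data":"([0-9a-f]{64})"'
    )

    def container_lifetimes(lines):
        """Map container ID -> seconds between its ContainerStarted and
        ContainerDied PLEG events, for IDs that show both in the excerpt."""
        started, out = {}, {}
        for line in lines:
            m = PLEG.search(line)
            if not m:
                continue
            h, mi, s, kind, cid = m.groups()
            t = int(h) * 3600 + int(mi) * 60 + float(s)
            if kind == "ContainerStarted":
                started[cid] = t
            elif cid in started:
                out[cid] = t - started[cid]
        return out

For instance, the edpm job container ecdced9e... is Started at 18:44:03.963646 and its Died event is logged at 18:44:07.032491, a lifetime of about 3.07 s.
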
4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" podStartSLOduration=1.568404565 podStartE2EDuration="2.151110099s" podCreationTimestamp="2026-01-28 18:44:09 +0000 UTC" firstStartedPulling="2026-01-28 18:44:10.04836429 +0000 UTC m=+1860.874927111" lastFinishedPulling="2026-01-28 18:44:10.631069824 +0000 UTC m=+1861.457632645" observedRunningTime="2026-01-28 18:44:11.127370189 +0000 UTC m=+1861.953933030" watchObservedRunningTime="2026-01-28 18:44:11.151110099 +0000 UTC m=+1861.977672930" Jan 28 18:44:11 crc kubenswrapper[4985]: I0128 18:44:11.866530 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011001 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011195 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011242 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011367 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011404 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.011464 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") pod \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\" (UID: \"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e\") " Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.020068 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9" (OuterVolumeSpecName: "kube-api-access-rndb9") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "kube-api-access-rndb9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.027401 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts" (OuterVolumeSpecName: "scripts") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.087299 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.109790 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114728 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rndb9\" (UniqueName: \"kubernetes.io/projected/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-kube-api-access-rndb9\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114776 4985 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114791 4985 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.114805 4985 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.140659 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.140962 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e","Type":"ContainerDied","Data":"bc5e5343b1013225c0f09fa05053ffaef8f092c7d05aeab8940382306b98a83a"} Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.141136 4985 scope.go:117] "RemoveContainer" containerID="3f619d361f2082394dafaa75e905aac02d4c442e242a675a1f30d1c46ea1e731" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.182482 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data" (OuterVolumeSpecName: "config-data") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.213579 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" (UID: "3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.217103 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.217135 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.266338 4985 scope.go:117] "RemoveContainer" containerID="0ca922d725193f731de31c12f898c60af2c134f41e240b2f16a4ae9def302a65" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.298720 4985 scope.go:117] "RemoveContainer" containerID="a5427ec62937c76e656c69cbc0cb1d25355ec92c6e45ce8c43e5e2fc0b2aa895" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.326039 4985 scope.go:117] "RemoveContainer" containerID="352c03bb8c26c1882850fe5aac45fc2c005c430ba571346b869f13a0a01a7ae7" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.505313 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.525319 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.536724 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537397 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537422 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537446 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537454 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537476 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537483 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.537510 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-listener" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537518 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" 
containerName="aodh-listener" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537787 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-listener" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537814 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-api" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537829 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-evaluator" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.537840 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" containerName="aodh-notifier" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.540097 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.544937 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-internal-svc" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.545134 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-aodh-public-svc" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.545265 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-autoscaling-dockercfg-bbsjj" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.546569 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-config-data" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.546894 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"aodh-scripts" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.547816 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:12 crc kubenswrapper[4985]: E0128 18:44:12.579641 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3dfcde6a_1a5e_454b_8fdb_29b33c0bb80e.slice\": RecentStats: unable to find data in memory cache]" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744294 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-config-data\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744377 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744423 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-scripts\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744489 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-ht5s6\" (UniqueName: \"kubernetes.io/projected/9f75cd8d-6a02-43e4-8e58-92f8d024311b-kube-api-access-ht5s6\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744521 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-internal-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.744597 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-public-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847672 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-config-data\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847734 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847767 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-scripts\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847805 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ht5s6\" (UniqueName: \"kubernetes.io/projected/9f75cd8d-6a02-43e4-8e58-92f8d024311b-kube-api-access-ht5s6\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847828 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-internal-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.847874 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-public-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.853188 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-scripts\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.855470 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-config-data\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.856187 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-combined-ca-bundle\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.857614 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-public-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.862440 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f75cd8d-6a02-43e4-8e58-92f8d024311b-internal-tls-certs\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:12 crc kubenswrapper[4985]: I0128 18:44:12.870328 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ht5s6\" (UniqueName: \"kubernetes.io/projected/9f75cd8d-6a02-43e4-8e58-92f8d024311b-kube-api-access-ht5s6\") pod \"aodh-0\" (UID: \"9f75cd8d-6a02-43e4-8e58-92f8d024311b\") " pod="openstack/aodh-0" Jan 28 18:44:13 crc kubenswrapper[4985]: I0128 18:44:13.165235 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/aodh-0" Jan 28 18:44:13 crc kubenswrapper[4985]: I0128 18:44:13.278983 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e" path="/var/lib/kubelet/pods/3dfcde6a-1a5e-454b-8fdb-29b33c0bb80e/volumes" Jan 28 18:44:13 crc kubenswrapper[4985]: I0128 18:44:13.669218 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/aodh-0"] Jan 28 18:44:14 crc kubenswrapper[4985]: I0128 18:44:14.163874 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"599f433e6e07f7f29b55761c870470f88d9785648c856771468211fdd5b0b9d5"} Jan 28 18:44:15 crc kubenswrapper[4985]: I0128 18:44:15.179030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"6a16e29998d0204774709ad186ac56ea5ecfa8ddcb3a94af744722bfa2f69164"} Jan 28 18:44:16 crc kubenswrapper[4985]: I0128 18:44:16.198751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"2ee75215963d47e4abc8bbc03a7bc027dbf8f4a5eb9d5f4a75453b2088dea6b2"} Jan 28 18:44:16 crc kubenswrapper[4985]: I0128 18:44:16.268228 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:16 crc kubenswrapper[4985]: E0128 18:44:16.269142 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:44:17 crc kubenswrapper[4985]: I0128 18:44:17.215988 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"f77be6508118811e0e0c175857c64ed4c215da705cbccb44e6e372e011e9bb6e"} Jan 28 18:44:19 crc kubenswrapper[4985]: I0128 18:44:19.239697 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/aodh-0" event={"ID":"9f75cd8d-6a02-43e4-8e58-92f8d024311b","Type":"ContainerStarted","Data":"a8c39c795be1a0f809d3e3083127dedc1663461a6d6f386ad6a1df590232c344"} Jan 28 18:44:19 crc kubenswrapper[4985]: I0128 18:44:19.280994 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/aodh-0" podStartSLOduration=3.056400966 podStartE2EDuration="7.280975959s" podCreationTimestamp="2026-01-28 18:44:12 +0000 UTC" firstStartedPulling="2026-01-28 18:44:13.696025962 +0000 UTC m=+1864.522588783" lastFinishedPulling="2026-01-28 18:44:17.920600955 +0000 UTC m=+1868.747163776" observedRunningTime="2026-01-28 18:44:19.272098399 +0000 UTC m=+1870.098661280" watchObservedRunningTime="2026-01-28 18:44:19.280975959 +0000 UTC m=+1870.107538770" Jan 28 18:44:28 crc kubenswrapper[4985]: I0128 18:44:28.265433 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:28 crc kubenswrapper[4985]: E0128 18:44:28.266298 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:44:37 crc kubenswrapper[4985]: I0128 18:44:37.131582 4985 scope.go:117] "RemoveContainer" containerID="d27c06d418e20207c2740cbbbe652b37993ed962b6ece756db68f47e6fdcdfce" Jan 28 18:44:37 crc kubenswrapper[4985]: I0128 18:44:37.168230 4985 scope.go:117] "RemoveContainer" containerID="1c42c60ea57a6197ce6f5b78eaab66b518ac9296d9bfa8c605b8d293dcd46e71" Jan 28 18:44:37 crc kubenswrapper[4985]: I0128 18:44:37.244143 4985 scope.go:117] "RemoveContainer" containerID="c2123433fc9db86b4e9f9ac84736c01949000210bd3cce880a9a4ecb7af8212e" Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.268510 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.802652 4985 generic.go:334] "Generic (PLEG): container finished" podID="ae555e00-c2df-4fce-af07-a91133f8767d" containerID="3f596ee94730f42a50d8192fb4c5ca1568a36162c5e3f9d2fddd534fad4f30ed" exitCode=0 Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.802748 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerDied","Data":"3f596ee94730f42a50d8192fb4c5ca1568a36162c5e3f9d2fddd534fad4f30ed"} Jan 28 18:44:43 crc kubenswrapper[4985]: I0128 18:44:43.806009 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87"} Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.823849 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerStarted","Data":"85ace350c9eb3209c1e405e7336cf4947ba7e03f10c6bdca9e56f9a095a2540e"} Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.824643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1" Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.868375 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.868354673 podStartE2EDuration="36.868354673s" podCreationTimestamp="2026-01-28 18:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:44:44.854600744 +0000 UTC m=+1895.681163565" watchObservedRunningTime="2026-01-28 18:44:44.868354673 +0000 UTC m=+1895.694917494" Jan 28 18:44:58 crc kubenswrapper[4985]: I0128 18:44:58.549547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1" Jan 28 18:44:58 crc kubenswrapper[4985]: I0128 18:44:58.680391 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.161106 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.163330 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.163330 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.823849 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-1" event={"ID":"ae555e00-c2df-4fce-af07-a91133f8767d","Type":"ContainerStarted","Data":"85ace350c9eb3209c1e405e7336cf4947ba7e03f10c6bdca9e56f9a095a2540e"}
Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.824643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:44 crc kubenswrapper[4985]: I0128 18:44:44.868375 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-1" podStartSLOduration=36.868354673 podStartE2EDuration="36.868354673s" podCreationTimestamp="2026-01-28 18:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:44:44.854600744 +0000 UTC m=+1895.681163565" watchObservedRunningTime="2026-01-28 18:44:44.868354673 +0000 UTC m=+1895.694917494"
Jan 28 18:44:58 crc kubenswrapper[4985]: I0128 18:44:58.549547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-1"
Jan 28 18:44:58 crc kubenswrapper[4985]: I0128 18:44:58.680391 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.161106 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"]
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.177405 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"]
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.180505 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.180795 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.235109 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.235398 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.235684 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.338227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.338501 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.338526 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.339920 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.352549 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.364243 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"collect-profiles-29493765-l92vx\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:00 crc kubenswrapper[4985]: I0128 18:45:00.553838 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"
Jan 28 18:45:01 crc kubenswrapper[4985]: I0128 18:45:01.061804 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"]
Jan 28 18:45:02 crc kubenswrapper[4985]: I0128 18:45:02.013147 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerStarted","Data":"e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164"}
Jan 28 18:45:02 crc kubenswrapper[4985]: I0128 18:45:02.013780 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerStarted","Data":"1ce94eac799321de69e9c9fc5fc48746bb0c136d311f15aa248ff7840a09e662"}
Jan 28 18:45:02 crc kubenswrapper[4985]: I0128 18:45:02.028862 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" podStartSLOduration=2.028845228 podStartE2EDuration="2.028845228s" podCreationTimestamp="2026-01-28 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 18:45:02.028008134 +0000 UTC m=+1912.854570955" watchObservedRunningTime="2026-01-28 18:45:02.028845228 +0000 UTC m=+1912.855408049"
Jan 28 18:45:03 crc kubenswrapper[4985]: I0128 18:45:03.025994 4985 generic.go:334] "Generic (PLEG): container finished" podID="62198283-1005-48a7-91a7-44d4240224ef" containerID="e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164" exitCode=0
Jan 28 18:45:03 crc kubenswrapper[4985]: I0128 18:45:03.026042 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerDied","Data":"e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164"}
Jan 28 18:45:03 crc kubenswrapper[4985]: I0128 18:45:03.371674 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" containerID="cri-o://ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" gracePeriod=604796
containerID="cri-o://ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" gracePeriod=604796 Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.582826 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666182 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") pod \"62198283-1005-48a7-91a7-44d4240224ef\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666377 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") pod \"62198283-1005-48a7-91a7-44d4240224ef\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666760 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") pod \"62198283-1005-48a7-91a7-44d4240224ef\" (UID: \"62198283-1005-48a7-91a7-44d4240224ef\") " Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.666931 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume" (OuterVolumeSpecName: "config-volume") pod "62198283-1005-48a7-91a7-44d4240224ef" (UID: "62198283-1005-48a7-91a7-44d4240224ef"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.668041 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/62198283-1005-48a7-91a7-44d4240224ef-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.674231 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm" (OuterVolumeSpecName: "kube-api-access-j5fcm") pod "62198283-1005-48a7-91a7-44d4240224ef" (UID: "62198283-1005-48a7-91a7-44d4240224ef"). InnerVolumeSpecName "kube-api-access-j5fcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.678508 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "62198283-1005-48a7-91a7-44d4240224ef" (UID: "62198283-1005-48a7-91a7-44d4240224ef"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.770372 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/62198283-1005-48a7-91a7-44d4240224ef-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:04 crc kubenswrapper[4985]: I0128 18:45:04.770411 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5fcm\" (UniqueName: \"kubernetes.io/projected/62198283-1005-48a7-91a7-44d4240224ef-kube-api-access-j5fcm\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:05 crc kubenswrapper[4985]: I0128 18:45:05.053057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" event={"ID":"62198283-1005-48a7-91a7-44d4240224ef","Type":"ContainerDied","Data":"1ce94eac799321de69e9c9fc5fc48746bb0c136d311f15aa248ff7840a09e662"} Jan 28 18:45:05 crc kubenswrapper[4985]: I0128 18:45:05.053619 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ce94eac799321de69e9c9fc5fc48746bb0c136d311f15aa248ff7840a09e662" Jan 28 18:45:05 crc kubenswrapper[4985]: I0128 18:45:05.053345 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.030516 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123636 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123735 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123785 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123886 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123953 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.123987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: 
\"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124058 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124089 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124128 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.124161 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.125497 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\" (UID: \"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541\") " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129594 4985 generic.go:334] "Generic (PLEG): container finished" podID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" exitCode=0 Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129653 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerDied","Data":"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"} Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129687 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"8a4c48be-3f2f-4c2d-a0ba-2084caf7c541","Type":"ContainerDied","Data":"210b9569d6c0ecf168f35cbf15fa409f7c78272e84c7d067b7d52ec043eaaf23"} Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129707 4985 scope.go:117] "RemoveContainer" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129898 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.129991 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). 
InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.130738 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.132080 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.136828 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.144542 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw" (OuterVolumeSpecName: "kube-api-access-r4mrw") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "kube-api-access-r4mrw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.180738 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.191998 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info" (OuterVolumeSpecName: "pod-info") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.197640 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03" (OuterVolumeSpecName: "persistence") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.208328 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data" (OuterVolumeSpecName: "config-data") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237539 4985 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237586 4985 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237597 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237606 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237619 4985 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237632 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237676 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") on node \"crc\" " Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237695 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.237709 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r4mrw\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-kube-api-access-r4mrw\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.283305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf" (OuterVolumeSpecName: "server-conf") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "server-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.292992 4985 scope.go:117] "RemoveContainer" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.294974 4985 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.295216 4985 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03") on node "crc" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.339728 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.339773 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") on node \"crc\" DevicePath \"\"" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.345935 4985 scope.go:117] "RemoveContainer" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.347050 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": container with ID starting with ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d not found: ID does not exist" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347097 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"} err="failed to get container status \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": rpc error: code = NotFound desc = could not find container \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": container with ID starting with ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d not found: ID does not exist" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347123 4985 scope.go:117] "RemoveContainer" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.347414 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": container with ID starting with 51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517 not found: ID does not exist" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517" Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347443 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"} err="failed to get container status \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": rpc error: code = NotFound desc = 
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.339728 4985 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-server-conf\") on node \"crc\" DevicePath \"\""
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.339773 4985 reconciler_common.go:293] "Volume detached for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") on node \"crc\" DevicePath \"\""
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.345935 4985 scope.go:117] "RemoveContainer" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"
Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.347050 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": container with ID starting with ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d not found: ID does not exist" containerID="ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347097 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d"} err="failed to get container status \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": rpc error: code = NotFound desc = could not find container \"ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d\": container with ID starting with ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d not found: ID does not exist"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347123 4985 scope.go:117] "RemoveContainer" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"
Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.347414 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": container with ID starting with 51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517 not found: ID does not exist" containerID="51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.347443 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517"} err="failed to get container status \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": rpc error: code = NotFound desc = could not find container \"51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517\": container with ID starting with 51a03d465bb89e7c069b1d618327b81d456bc2090cbce7eb2f810aaca9a6e517 not found: ID does not exist"
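
Both DeleteContainer errors above are benign: removal raced with cri-o's own cleanup, the runtime answered NotFound, and the kubelet logs the error but treats the container as already gone, which keeps removal idempotent. A sketch of that pattern (errNotFound stands in for the gRPC NotFound status):

package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("NotFound")

// containerStatus simulates a runtime lookup for a container that was
// already removed out from under us.
func containerStatus(id string) error {
	return fmt.Errorf("could not find container %q: %w", id, errNotFound)
}

func removeContainer(id string) {
	if err := containerStatus(id); err != nil {
		if errors.Is(err, errNotFound) {
			fmt.Printf("DeleteContainer returned error: %v (treated as already removed)\n", err)
			return // desired state reached: the container does not exist
		}
		// any other error would be retried on a later sync
	}
	fmt.Println("removed", id)
}

func main() {
	removeContainer("ff20ac5f2033f56c2dd6bc48cbc5842dc5ea4c6b69973da546211ddf97b5932d")
}
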
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.352094 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" (UID: "8a4c48be-3f2f-4c2d-a0ba-2084caf7c541"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.441599 4985 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.482993 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.504271 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.517693 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.518344 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="setup-container"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518367 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="setup-container"
Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.518396 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62198283-1005-48a7-91a7-44d4240224ef" containerName="collect-profiles"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518405 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="62198283-1005-48a7-91a7-44d4240224ef" containerName="collect-profiles"
Jan 28 18:45:10 crc kubenswrapper[4985]: E0128 18:45:10.518422 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518430 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518724 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="62198283-1005-48a7-91a7-44d4240224ef" containerName="collect-profiles"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.518757 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.520291 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.534623 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656066 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656229 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656327 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656353 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656438 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656473 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656493 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656547 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656565 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.656579 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4wkn\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-kube-api-access-w4wkn\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758323 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758387 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758429 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758471 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758500 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758561 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758589 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758609 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758649 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758666 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4wkn\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-kube-api-access-w4wkn\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.758681 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.759352 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.759574 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.760054 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.760558 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-server-conf\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.761045 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-config-data\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.761242 4985 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.761303 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/3c775c7dad0eb68939020e6ac39de7a8b8505e50517c4739aca512474a1c5503/globalmount\"" pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.764729 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.764915 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-pod-info\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.770592 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.772986 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.777026 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4wkn\" (UniqueName: \"kubernetes.io/projected/dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe-kube-api-access-w4wkn\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.835609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e66ffe7e-8f1d-424d-9b5a-284a542a7e03\") pod \"rabbitmq-server-0\" (UID: \"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe\") " pod="openstack/rabbitmq-server-0"
Jan 28 18:45:10 crc kubenswrapper[4985]: I0128 18:45:10.856508 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 18:45:11 crc kubenswrapper[4985]: I0128 18:45:11.279806 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" path="/var/lib/kubelet/pods/8a4c48be-3f2f-4c2d-a0ba-2084caf7c541/volumes"
Jan 28 18:45:11 crc kubenswrapper[4985]: I0128 18:45:11.374037 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 18:45:12 crc kubenswrapper[4985]: I0128 18:45:12.164162 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe","Type":"ContainerStarted","Data":"0b8fe0b05d817e6602bab1697f2117e1cc7cb2712aee0c798c6e6d8d4c1ecee2"}
Jan 28 18:45:13 crc kubenswrapper[4985]: I0128 18:45:13.187158 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe","Type":"ContainerStarted","Data":"16f535ef854b9c0ece73b0832601c36f1589afcd2ce2c474cd161032d681a6ab"}
Jan 28 18:45:14 crc kubenswrapper[4985]: I0128 18:45:14.835623 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="8a4c48be-3f2f-4c2d-a0ba-2084caf7c541" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.129:5671: i/o timeout"
18:45:46.975576143 +0000 UTC m=+1957.802138964" watchObservedRunningTime="2026-01-28 18:45:46.990991419 +0000 UTC m=+1957.817554240" Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.047576 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.060331 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-ksczb"] Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.074763 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.085765 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-1abf-account-create-update-fwwhm"] Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.285352 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9900c5fe-8fec-452e-86cc-98d901c94329" path="/var/lib/kubelet/pods/9900c5fe-8fec-452e-86cc-98d901c94329/volumes" Jan 28 18:45:57 crc kubenswrapper[4985]: I0128 18:45:57.288242 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6004532-b8ab-4b69-9907-e7bd26c6735a" path="/var/lib/kubelet/pods/e6004532-b8ab-4b69-9907-e7bd26c6735a/volumes" Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.037736 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.052453 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.068794 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.082518 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.095928 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-9qd5p"] Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.107996 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-3e6a-account-create-update-ktg62"] Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.119491 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-7fd1-account-create-update-tlhk7"] Jan 28 18:45:58 crc kubenswrapper[4985]: I0128 18:45:58.130766 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-z2jgs"] Jan 28 18:45:59 crc kubenswrapper[4985]: I0128 18:45:59.305534 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a24a5c2-4c45-43dd-a957-253323fed4d5" path="/var/lib/kubelet/pods/1a24a5c2-4c45-43dd-a957-253323fed4d5/volumes" Jan 28 18:45:59 crc kubenswrapper[4985]: I0128 18:45:59.306955 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="346cb311-0387-4c85-9827-e0091b1e6bcd" path="/var/lib/kubelet/pods/346cb311-0387-4c85-9827-e0091b1e6bcd/volumes" Jan 28 18:45:59 crc kubenswrapper[4985]: I0128 18:45:59.308855 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4adf60c6-4008-4f41-a60b-cf10db1657cf" path="/var/lib/kubelet/pods/4adf60c6-4008-4f41-a60b-cf10db1657cf/volumes" Jan 28 18:45:59 crc kubenswrapper[4985]: I0128 18:45:59.309957 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="8c2755f3-fac4-4f0b-9afb-a449f1587d11" path="/var/lib/kubelet/pods/8c2755f3-fac4-4f0b-9afb-a449f1587d11/volumes" Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.034014 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.050458 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.061415 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-db-create-kwqd8"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.075160 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-53b2-account-create-update-qhkg4"] Jan 28 18:46:00 crc kubenswrapper[4985]: I0128 18:46:00.859644 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 18:46:01 crc kubenswrapper[4985]: I0128 18:46:01.287918 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9193a306-03fe-41ae-8b93-2851b08c73fb" path="/var/lib/kubelet/pods/9193a306-03fe-41ae-8b93-2851b08c73fb/volumes" Jan 28 18:46:01 crc kubenswrapper[4985]: I0128 18:46:01.288771 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbefdfab-0ef2-4f71-9e9c-412c4dd87886" path="/var/lib/kubelet/pods/dbefdfab-0ef2-4f71-9e9c-412c4dd87886/volumes" Jan 28 18:46:04 crc kubenswrapper[4985]: I0128 18:46:04.038981 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-9sg6w"] Jan 28 18:46:04 crc kubenswrapper[4985]: I0128 18:46:04.059553 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-9sg6w"] Jan 28 18:46:05 crc kubenswrapper[4985]: I0128 18:46:05.278793 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdbd403f-b5d7-4aba-9ee6-bcbbd933b212" path="/var/lib/kubelet/pods/cdbd403f-b5d7-4aba-9ee6-bcbbd933b212/volumes" Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.038921 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.051818 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.064082 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-ba0b-account-create-update-56qr8"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.074721 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/mysqld-exporter-openstack-cell1-db-create-fvvh2"] Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.277870 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53f6fb79-54ff-4a24-ad53-5812b6faa504" path="/var/lib/kubelet/pods/53f6fb79-54ff-4a24-ad53-5812b6faa504/volumes" Jan 28 18:46:09 crc kubenswrapper[4985]: I0128 18:46:09.278594 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c57cd6d-54d8-4d7c-863c-cfd30fab0768" path="/var/lib/kubelet/pods/8c57cd6d-54d8-4d7c-863c-cfd30fab0768/volumes" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.747064 4985 scope.go:117] "RemoveContainer" containerID="521672f13c59cc25ffac94ddae42298d333bbe43930229a9ebba2d7ae20a8b6d" Jan 28 18:46:37 crc kubenswrapper[4985]: 
I0128 18:46:37.781519 4985 scope.go:117] "RemoveContainer" containerID="3060e8923564aa30fd03bf66b3d5bcff3578ea99d0b7eb76a560b9022326b58d" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.857667 4985 scope.go:117] "RemoveContainer" containerID="448c9182ae2c3757a2a9e99f29042394c97a623fe1975f8bf4c1b669c7542ca8" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.907824 4985 scope.go:117] "RemoveContainer" containerID="a5fdb593967057491cb666085c46aac8c70a1408fffafe7d2ec91a2157ba041a" Jan 28 18:46:37 crc kubenswrapper[4985]: I0128 18:46:37.971597 4985 scope.go:117] "RemoveContainer" containerID="cecab7e544d7d4e5d190c44116d919bb9260ba70670cc5c4245efeb8c2adb050" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.049984 4985 scope.go:117] "RemoveContainer" containerID="609eafe7485b15327ad2db6af8fea1da5eeeb224da5b54e1005034d41800fc19" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.129624 4985 scope.go:117] "RemoveContainer" containerID="7b723368d435c52066b70f7b63bb7ce17848129ed979021f777f40ce02cde0ea" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.153758 4985 scope.go:117] "RemoveContainer" containerID="b2b6ff931f4d8121ddd40be80d57520170cc490944b52533c2717e3ed1e070dd" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.186353 4985 scope.go:117] "RemoveContainer" containerID="dac80678a434994386297bfe622d70833a87d9d21510a5da7f0de00c71f32e28" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.243521 4985 scope.go:117] "RemoveContainer" containerID="b5b1a4710b8858945982e3f5911ca4fd86e8a7dae739eb3659e4c396927b6955" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.313977 4985 scope.go:117] "RemoveContainer" containerID="6c205ff1c9724512d656b6452f88a456eabb29c117c2d744ca2a5dce502105d6" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.367242 4985 scope.go:117] "RemoveContainer" containerID="4fa8b90db22baa4c4faa4968579997174ae718c0a3c0ae7654d27d51dc441aa9" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.448664 4985 scope.go:117] "RemoveContainer" containerID="1f111c090d549d68eb9c893a3868b82edfed972f352a2924277825559a933734" Jan 28 18:46:38 crc kubenswrapper[4985]: I0128 18:46:38.480029 4985 scope.go:117] "RemoveContainer" containerID="156d97e63d4214e7b4ebce332bf5ca2efd74529bc9a0eb50a6b04fcfb1f0fcab" Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.059630 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.075987 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4fswm"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.088124 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-create-5stnz"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.103660 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.115228 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-888tv"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.128734 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-br7rn"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.144645 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.157346 4985 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/barbican-db-create-4fswm"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.168579 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.178790 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-2615-account-create-update-8xhkc"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.189932 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-888tv"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.200785 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-2623-account-create-update-nvftp"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.213286 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8d89-account-create-update-8fw8c"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.225987 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-br7rn"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.237436 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-4d8b-account-create-update-hg9ms"] Jan 28 18:46:40 crc kubenswrapper[4985]: I0128 18:46:40.251376 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-create-5stnz"] Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.282135 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a7822ab-0225-4deb-a283-374e32bc995f" path="/var/lib/kubelet/pods/0a7822ab-0225-4deb-a283-374e32bc995f/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.287733 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fc487cd-a539-4daa-8c13-40d0cea82770" path="/var/lib/kubelet/pods/0fc487cd-a539-4daa-8c13-40d0cea82770/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.291838 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd289b0-2807-4b7e-bdc0-300fe0ce09b2" path="/var/lib/kubelet/pods/3bd289b0-2807-4b7e-bdc0-300fe0ce09b2/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.295137 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d078ca4-34dd-4a65-a2e4-ffc23f098285" path="/var/lib/kubelet/pods/6d078ca4-34dd-4a65-a2e4-ffc23f098285/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.311177 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="768c2a33-259c-4194-ad30-8edffff92f18" path="/var/lib/kubelet/pods/768c2a33-259c-4194-ad30-8edffff92f18/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.316763 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="887f886a-9541-4075-9d32-0d8feaf32722" path="/var/lib/kubelet/pods/887f886a-9541-4075-9d32-0d8feaf32722/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.319124 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c052fbc1-a102-456b-8658-c954fe91534b" path="/var/lib/kubelet/pods/c052fbc1-a102-456b-8658-c954fe91534b/volumes" Jan 28 18:46:41 crc kubenswrapper[4985]: I0128 18:46:41.320596 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7074267-6514-4b90-9aef-a4df05b52054" path="/var/lib/kubelet/pods/d7074267-6514-4b90-9aef-a4df05b52054/volumes" Jan 28 18:46:42 crc kubenswrapper[4985]: I0128 18:46:42.038138 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/glance-db-sync-5q5qm"] Jan 28 18:46:42 crc kubenswrapper[4985]: I0128 18:46:42.051904 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-5q5qm"] Jan 28 18:46:43 crc kubenswrapper[4985]: I0128 18:46:43.286214 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="229b9159-df89-4859-b5f3-d34b2759d0fd" path="/var/lib/kubelet/pods/229b9159-df89-4859-b5f3-d34b2759d0fd/volumes" Jan 28 18:46:46 crc kubenswrapper[4985]: I0128 18:46:46.028969 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-49fs2"] Jan 28 18:46:46 crc kubenswrapper[4985]: I0128 18:46:46.042405 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-49fs2"] Jan 28 18:46:47 crc kubenswrapper[4985]: I0128 18:46:47.284571 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c3b6ba3-2c25-4da1-b02f-de0e776383c1" path="/var/lib/kubelet/pods/6c3b6ba3-2c25-4da1-b02f-de0e776383c1/volumes" Jan 28 18:47:11 crc kubenswrapper[4985]: I0128 18:47:11.185999 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:47:11 crc kubenswrapper[4985]: I0128 18:47:11.186653 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.134413 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.138021 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.145376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.309778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.310125 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.310222 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.412982 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413047 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413235 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413603 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.413640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.456295 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"certified-operators-zjwln\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:16 crc kubenswrapper[4985]: I0128 18:47:16.463196 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:17 crc kubenswrapper[4985]: I0128 18:47:17.130474 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.082987 4985 generic.go:334] "Generic (PLEG): container finished" podID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748" exitCode=0 Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.083372 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748"} Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.083407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerStarted","Data":"b5edb2b86f696acde21c697dd591a86e6bb2afd0a8cb27222ce7b1cd843ebb0e"} Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.087724 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.544450 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.548502 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.564899 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.644569 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.644638 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.644671 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747385 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747420 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747932 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.747980 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.769219 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"redhat-marketplace-qvjh4\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:18 crc kubenswrapper[4985]: I0128 18:47:18.882878 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.130650 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.133443 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.142827 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.156820 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.156872 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.156930 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259075 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259136 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259182 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.259637 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.294472 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"redhat-operators-6l7vb\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.456944 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:19 crc kubenswrapper[4985]: I0128 18:47:19.825223 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:20 crc kubenswrapper[4985]: W0128 18:47:20.011700 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod13b350b8_ace5_45c9_9de3_0b4887795c48.slice/crio-04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962 WatchSource:0}: Error finding container 04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962: Status 404 returned error can't find the container with id 04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962 Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.012902 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.114311 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerStarted","Data":"04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962"} Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.116326 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerStarted","Data":"fbb3b7576bc49a07a7ed4e1638eb87bdd32c1fd17054a063d0d281a60776ca08"} Jan 28 18:47:20 crc kubenswrapper[4985]: I0128 18:47:20.119659 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerStarted","Data":"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"} Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.131491 4985 generic.go:334] "Generic (PLEG): container finished" podID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" exitCode=0 Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.131558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" 
event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51"} Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.135689 4985 generic.go:334] "Generic (PLEG): container finished" podID="a647567b-b5d7-4001-aeb7-085793d361ae" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2" exitCode=0 Jan 28 18:47:21 crc kubenswrapper[4985]: I0128 18:47:21.135908 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2"} Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.182629 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerStarted","Data":"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"} Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.185755 4985 generic.go:334] "Generic (PLEG): container finished" podID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b" exitCode=0 Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.185831 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"} Jan 28 18:47:23 crc kubenswrapper[4985]: I0128 18:47:23.193910 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerStarted","Data":"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5"} Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.237225 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerStarted","Data":"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"} Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.273040 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zjwln" podStartSLOduration=3.470990271 podStartE2EDuration="10.273016429s" podCreationTimestamp="2026-01-28 18:47:16 +0000 UTC" firstStartedPulling="2026-01-28 18:47:18.085375754 +0000 UTC m=+2048.911938595" lastFinishedPulling="2026-01-28 18:47:24.887401932 +0000 UTC m=+2055.713964753" observedRunningTime="2026-01-28 18:47:26.257559701 +0000 UTC m=+2057.084122512" watchObservedRunningTime="2026-01-28 18:47:26.273016429 +0000 UTC m=+2057.099579260" Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.463452 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:26 crc kubenswrapper[4985]: I0128 18:47:26.463590 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:27 crc kubenswrapper[4985]: I0128 18:47:27.518320 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-zjwln" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" 
probeResult="failure" output=< Jan 28 18:47:27 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:47:27 crc kubenswrapper[4985]: > Jan 28 18:47:28 crc kubenswrapper[4985]: I0128 18:47:28.260959 4985 generic.go:334] "Generic (PLEG): container finished" podID="a647567b-b5d7-4001-aeb7-085793d361ae" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656" exitCode=0 Jan 28 18:47:28 crc kubenswrapper[4985]: I0128 18:47:28.261003 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"} Jan 28 18:47:30 crc kubenswrapper[4985]: I0128 18:47:30.288707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerStarted","Data":"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"} Jan 28 18:47:30 crc kubenswrapper[4985]: I0128 18:47:30.315149 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qvjh4" podStartSLOduration=4.000411963 podStartE2EDuration="12.315127022s" podCreationTimestamp="2026-01-28 18:47:18 +0000 UTC" firstStartedPulling="2026-01-28 18:47:21.140882459 +0000 UTC m=+2051.967445300" lastFinishedPulling="2026-01-28 18:47:29.455597538 +0000 UTC m=+2060.282160359" observedRunningTime="2026-01-28 18:47:30.306099436 +0000 UTC m=+2061.132662257" watchObservedRunningTime="2026-01-28 18:47:30.315127022 +0000 UTC m=+2061.141689853" Jan 28 18:47:34 crc kubenswrapper[4985]: I0128 18:47:34.332146 4985 generic.go:334] "Generic (PLEG): container finished" podID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5" exitCode=0 Jan 28 18:47:34 crc kubenswrapper[4985]: I0128 18:47:34.332226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5"} Jan 28 18:47:35 crc kubenswrapper[4985]: I0128 18:47:35.354273 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerStarted","Data":"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae"} Jan 28 18:47:35 crc kubenswrapper[4985]: I0128 18:47:35.386796 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6l7vb" podStartSLOduration=2.669651987 podStartE2EDuration="16.38677753s" podCreationTimestamp="2026-01-28 18:47:19 +0000 UTC" firstStartedPulling="2026-01-28 18:47:21.133824329 +0000 UTC m=+2051.960387150" lastFinishedPulling="2026-01-28 18:47:34.850949872 +0000 UTC m=+2065.677512693" observedRunningTime="2026-01-28 18:47:35.373990938 +0000 UTC m=+2066.200553779" watchObservedRunningTime="2026-01-28 18:47:35.38677753 +0000 UTC m=+2066.213340351" Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.367430 4985 generic.go:334] "Generic (PLEG): container finished" podID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerID="bc9afc05871aa23d4c3db1d4e88d2efe8c3615724cb67da049ef34770cd610ef" exitCode=0 Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.367498 4985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerDied","Data":"bc9afc05871aa23d4c3db1d4e88d2efe8c3615724cb67da049ef34770cd610ef"} Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.520796 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.590371 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:36 crc kubenswrapper[4985]: I0128 18:47:36.761155 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:37 crc kubenswrapper[4985]: I0128 18:47:37.942572 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.050597 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.050657 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.050712 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.051644 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") pod \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\" (UID: \"3865f1db-f707-4b28-bbf2-8ce1975baa1f\") " Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.065342 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.069149 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll" (OuterVolumeSpecName: "kube-api-access-4r6ll") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "kube-api-access-4r6ll". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.089025 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.089967 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory" (OuterVolumeSpecName: "inventory") pod "3865f1db-f707-4b28-bbf2-8ce1975baa1f" (UID: "3865f1db-f707-4b28-bbf2-8ce1975baa1f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155035 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155075 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155088 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4r6ll\" (UniqueName: \"kubernetes.io/projected/3865f1db-f707-4b28-bbf2-8ce1975baa1f-kube-api-access-4r6ll\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.155107 4985 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3865f1db-f707-4b28-bbf2-8ce1975baa1f-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" event={"ID":"3865f1db-f707-4b28-bbf2-8ce1975baa1f","Type":"ContainerDied","Data":"1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b"} Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390978 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfb1cd976d4fbd706984e82e00454ee0234df3e9f729b27a0e1988a842cf66b" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390299 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zjwln" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" containerID="cri-o://01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18" gracePeriod=2 Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.390061 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.490239 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"] Jan 28 18:47:38 crc kubenswrapper[4985]: E0128 18:47:38.490777 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.490800 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.491111 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3865f1db-f707-4b28-bbf2-8ce1975baa1f" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.492189 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.495365 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.495548 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.495463 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.496108 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.511507 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"] Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.666550 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.666744 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.666800 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: 
I0128 18:47:38.769636 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.770156 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.770226 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.775793 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.775821 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.788609 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-42d8l\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.826133 4985 scope.go:117] "RemoveContainer" containerID="8d83ae610aea076db41903e479372673c489635bc359f8ba503ad92865568b4d" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.881702 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.883090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.883194 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:38 crc kubenswrapper[4985]: I0128 18:47:38.953893 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.050559 4985 scope.go:117] "RemoveContainer" containerID="6f81b27fc2e7a5ce52780bd694a1d7b0af6de17e38f2a816f35448cc2f8e93b0" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.078891 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.105433 4985 scope.go:117] "RemoveContainer" containerID="82ff15708c7feba4b50bfae36f824c144bddeb2ec8ddc05a588aede4034d1eb1" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.163002 4985 scope.go:117] "RemoveContainer" containerID="92ba33b439db2a5df5ff34914eff515d7a059caada35a79afe448a92f1201c1e" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.180069 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") pod \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.180236 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") pod \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.180298 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") pod \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\" (UID: \"4ccb0c01-9886-4215-b63d-a0fdcc81a25c\") " Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.181647 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities" (OuterVolumeSpecName: "utilities") pod "4ccb0c01-9886-4215-b63d-a0fdcc81a25c" (UID: "4ccb0c01-9886-4215-b63d-a0fdcc81a25c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.182491 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.186660 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq" (OuterVolumeSpecName: "kube-api-access-cthvq") pod "4ccb0c01-9886-4215-b63d-a0fdcc81a25c" (UID: "4ccb0c01-9886-4215-b63d-a0fdcc81a25c"). 
InnerVolumeSpecName "kube-api-access-cthvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.207820 4985 scope.go:117] "RemoveContainer" containerID="ef6310844d9eb58852520a7287dfca2d3780f36ea565d58fea9a7e00a7b9506b" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.238537 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ccb0c01-9886-4215-b63d-a0fdcc81a25c" (UID: "4ccb0c01-9886-4215-b63d-a0fdcc81a25c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.257991 4985 scope.go:117] "RemoveContainer" containerID="f7f9efcfdd23e8d8635c4c036c55b162db6c57b666261780d55e532d672c4438" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.284941 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.284979 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cthvq\" (UniqueName: \"kubernetes.io/projected/4ccb0c01-9886-4215-b63d-a0fdcc81a25c-kube-api-access-cthvq\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.287645 4985 scope.go:117] "RemoveContainer" containerID="62b40fcabf6fa0fa3594d971ef20837ab76d19a05ef888b27ef59e8e216c6b43" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.315909 4985 scope.go:117] "RemoveContainer" containerID="0ab08bac76909d1e142ea94f2076118980c9731dca96c80e8289000d98f0d6ce" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.344425 4985 scope.go:117] "RemoveContainer" containerID="fc0b5d4f8a27e5da50b50ceabdadd101d74be078c6014be172f85e01027bd9af" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.385827 4985 scope.go:117] "RemoveContainer" containerID="d394f63865046e3bed1c13acb76b2d5b90327e2b0f8a9073a210a53855ab1204" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438793 4985 generic.go:334] "Generic (PLEG): container finished" podID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18" exitCode=0 Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438875 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"} Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438921 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zjwln" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438936 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zjwln" event={"ID":"4ccb0c01-9886-4215-b63d-a0fdcc81a25c","Type":"ContainerDied","Data":"b5edb2b86f696acde21c697dd591a86e6bb2afd0a8cb27222ce7b1cd843ebb0e"} Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.438956 4985 scope.go:117] "RemoveContainer" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.459617 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.459659 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.481484 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.493547 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zjwln"] Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.517895 4985 scope.go:117] "RemoveContainer" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.524959 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.530148 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l"] Jan 28 18:47:39 crc kubenswrapper[4985]: W0128 18:47:39.538697 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfbfc48e7_8a35_4fc6_b9fd_0c1735864116.slice/crio-3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56 WatchSource:0}: Error finding container 3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56: Status 404 returned error can't find the container with id 3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56 Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.578346 4985 scope.go:117] "RemoveContainer" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.623810 4985 scope.go:117] "RemoveContainer" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18" Jan 28 18:47:39 crc kubenswrapper[4985]: E0128 18:47:39.624310 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18\": container with ID starting with 01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18 not found: ID does not exist" containerID="01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624347 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18"} err="failed to get container status 
\"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18\": rpc error: code = NotFound desc = could not find container \"01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18\": container with ID starting with 01fbd7c17753a46a3b80c1d29341e919ea6a544cec16865c935c005fcc908e18 not found: ID does not exist" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624369 4985 scope.go:117] "RemoveContainer" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b" Jan 28 18:47:39 crc kubenswrapper[4985]: E0128 18:47:39.624684 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b\": container with ID starting with 1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b not found: ID does not exist" containerID="1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624720 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b"} err="failed to get container status \"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b\": rpc error: code = NotFound desc = could not find container \"1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b\": container with ID starting with 1aeb1754517fc81f5f048e4d33620f1eeb78b44dacbe90475527fe87021d343b not found: ID does not exist" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624744 4985 scope.go:117] "RemoveContainer" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748" Jan 28 18:47:39 crc kubenswrapper[4985]: E0128 18:47:39.624951 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748\": container with ID starting with 8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748 not found: ID does not exist" containerID="8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748" Jan 28 18:47:39 crc kubenswrapper[4985]: I0128 18:47:39.624980 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748"} err="failed to get container status \"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748\": rpc error: code = NotFound desc = could not find container \"8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748\": container with ID starting with 8460d9c93a8ad3bd1b16d78514b5cad63afc17dd4195ee4983a2e0145d985748 not found: ID does not exist" Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.459010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerStarted","Data":"24ae801d110a2ccea339ddd0d6272cdb220439bc5457fb577978b735b741f7fc"} Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.459293 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerStarted","Data":"3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56"} Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.487615 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" podStartSLOduration=2.020700396 podStartE2EDuration="2.487594124s" podCreationTimestamp="2026-01-28 18:47:38 +0000 UTC" firstStartedPulling="2026-01-28 18:47:39.547184521 +0000 UTC m=+2070.373747342" lastFinishedPulling="2026-01-28 18:47:40.014078249 +0000 UTC m=+2070.840641070" observedRunningTime="2026-01-28 18:47:40.477124107 +0000 UTC m=+2071.303686928" watchObservedRunningTime="2026-01-28 18:47:40.487594124 +0000 UTC m=+2071.314156945" Jan 28 18:47:40 crc kubenswrapper[4985]: I0128 18:47:40.510448 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6l7vb" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server" probeResult="failure" output=< Jan 28 18:47:40 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:47:40 crc kubenswrapper[4985]: > Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.185730 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.186140 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.277439 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" path="/var/lib/kubelet/pods/4ccb0c01-9886-4215-b63d-a0fdcc81a25c/volumes" Jan 28 18:47:41 crc kubenswrapper[4985]: I0128 18:47:41.763794 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:42 crc kubenswrapper[4985]: I0128 18:47:42.482636 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qvjh4" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server" containerID="cri-o://7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06" gracePeriod=2 Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.066859 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.187151 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") pod \"a647567b-b5d7-4001-aeb7-085793d361ae\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.187380 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") pod \"a647567b-b5d7-4001-aeb7-085793d361ae\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.187481 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") pod \"a647567b-b5d7-4001-aeb7-085793d361ae\" (UID: \"a647567b-b5d7-4001-aeb7-085793d361ae\") " Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.189019 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities" (OuterVolumeSpecName: "utilities") pod "a647567b-b5d7-4001-aeb7-085793d361ae" (UID: "a647567b-b5d7-4001-aeb7-085793d361ae"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.193697 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg" (OuterVolumeSpecName: "kube-api-access-j7gsg") pod "a647567b-b5d7-4001-aeb7-085793d361ae" (UID: "a647567b-b5d7-4001-aeb7-085793d361ae"). InnerVolumeSpecName "kube-api-access-j7gsg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.211352 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a647567b-b5d7-4001-aeb7-085793d361ae" (UID: "a647567b-b5d7-4001-aeb7-085793d361ae"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.291000 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.291031 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a647567b-b5d7-4001-aeb7-085793d361ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.291044 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7gsg\" (UniqueName: \"kubernetes.io/projected/a647567b-b5d7-4001-aeb7-085793d361ae-kube-api-access-j7gsg\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495302 4985 generic.go:334] "Generic (PLEG): container finished" podID="a647567b-b5d7-4001-aeb7-085793d361ae" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06" exitCode=0 Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495372 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"} Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495637 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qvjh4" event={"ID":"a647567b-b5d7-4001-aeb7-085793d361ae","Type":"ContainerDied","Data":"fbb3b7576bc49a07a7ed4e1638eb87bdd32c1fd17054a063d0d281a60776ca08"} Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495661 4985 scope.go:117] "RemoveContainer" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.495405 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qvjh4" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.519954 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.531921 4985 scope.go:117] "RemoveContainer" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.534562 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qvjh4"] Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.552770 4985 scope.go:117] "RemoveContainer" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.623597 4985 scope.go:117] "RemoveContainer" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06" Jan 28 18:47:43 crc kubenswrapper[4985]: E0128 18:47:43.624092 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06\": container with ID starting with 7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06 not found: ID does not exist" containerID="7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.624143 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06"} err="failed to get container status \"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06\": rpc error: code = NotFound desc = could not find container \"7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06\": container with ID starting with 7531a7df89056bf90261e352890b81652f617f7ed0d7f527563ddc46b00b9a06 not found: ID does not exist" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.624172 4985 scope.go:117] "RemoveContainer" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656" Jan 28 18:47:43 crc kubenswrapper[4985]: E0128 18:47:43.625296 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656\": container with ID starting with 77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656 not found: ID does not exist" containerID="77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.625327 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656"} err="failed to get container status \"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656\": rpc error: code = NotFound desc = could not find container \"77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656\": container with ID starting with 77d90df65ca6e57b5b5ce6b9065b5b8a68ab383f3922b15ddd9c88d379708656 not found: ID does not exist" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.625350 4985 scope.go:117] "RemoveContainer" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2" Jan 28 18:47:43 crc kubenswrapper[4985]: E0128 18:47:43.625646 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2\": container with ID starting with 9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2 not found: ID does not exist" containerID="9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2" Jan 28 18:47:43 crc kubenswrapper[4985]: I0128 18:47:43.625671 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2"} err="failed to get container status \"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2\": rpc error: code = NotFound desc = could not find container \"9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2\": container with ID starting with 9499e24337dbc0a11ef6181dcaa8e1179e8d9bc0c18832fa38345d689f0869a2 not found: ID does not exist" Jan 28 18:47:45 crc kubenswrapper[4985]: I0128 18:47:45.278115 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" path="/var/lib/kubelet/pods/a647567b-b5d7-4001-aeb7-085793d361ae/volumes" Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.061011 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-8h4kr"] Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.112801 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.131845 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-hlgnm"] Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.148228 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-8h4kr"] Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.280075 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a3199c2-6b1c-4a07-849d-cc92d372c5c3" path="/var/lib/kubelet/pods/4a3199c2-6b1c-4a07-849d-cc92d372c5c3/volumes" Jan 28 18:47:47 crc kubenswrapper[4985]: I0128 18:47:47.283652 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f788adab-3912-43da-869e-2450d65b761f" path="/var/lib/kubelet/pods/f788adab-3912-43da-869e-2450d65b761f/volumes" Jan 28 18:47:48 crc kubenswrapper[4985]: I0128 18:47:48.029764 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9w9wm"] Jan 28 18:47:48 crc kubenswrapper[4985]: I0128 18:47:48.043727 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9w9wm"] Jan 28 18:47:49 crc kubenswrapper[4985]: I0128 18:47:49.277701 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ba5eedf-14b8-45ce-b738-e41a6daff299" path="/var/lib/kubelet/pods/2ba5eedf-14b8-45ce-b738-e41a6daff299/volumes" Jan 28 18:47:49 crc kubenswrapper[4985]: I0128 18:47:49.518781 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:49 crc kubenswrapper[4985]: I0128 18:47:49.587065 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:50 crc kubenswrapper[4985]: I0128 18:47:50.731417 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:50 crc kubenswrapper[4985]: I0128 18:47:50.731973 4985 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/redhat-operators-6l7vb" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server" containerID="cri-o://de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" gracePeriod=2 Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.337525 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.504440 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") pod \"13b350b8-ace5-45c9-9de3-0b4887795c48\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.504585 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") pod \"13b350b8-ace5-45c9-9de3-0b4887795c48\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.504717 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") pod \"13b350b8-ace5-45c9-9de3-0b4887795c48\" (UID: \"13b350b8-ace5-45c9-9de3-0b4887795c48\") " Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.505648 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities" (OuterVolumeSpecName: "utilities") pod "13b350b8-ace5-45c9-9de3-0b4887795c48" (UID: "13b350b8-ace5-45c9-9de3-0b4887795c48"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.518520 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5" (OuterVolumeSpecName: "kube-api-access-s5qc5") pod "13b350b8-ace5-45c9-9de3-0b4887795c48" (UID: "13b350b8-ace5-45c9-9de3-0b4887795c48"). InnerVolumeSpecName "kube-api-access-s5qc5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.582904 4985 generic.go:334] "Generic (PLEG): container finished" podID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" exitCode=0 Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.582958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae"} Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.583013 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6l7vb" event={"ID":"13b350b8-ace5-45c9-9de3-0b4887795c48","Type":"ContainerDied","Data":"04c8c4cd2d28ac7bc4fefedc58c109823619aa72f9c17124c23d39096091e962"} Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.583053 4985 scope.go:117] "RemoveContainer" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.583075 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6l7vb" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.608226 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s5qc5\" (UniqueName: \"kubernetes.io/projected/13b350b8-ace5-45c9-9de3-0b4887795c48-kube-api-access-s5qc5\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.608291 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.617633 4985 scope.go:117] "RemoveContainer" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.640206 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13b350b8-ace5-45c9-9de3-0b4887795c48" (UID: "13b350b8-ace5-45c9-9de3-0b4887795c48"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.687630 4985 scope.go:117] "RemoveContainer" containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.712215 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13b350b8-ace5-45c9-9de3-0b4887795c48-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.721545 4985 scope.go:117] "RemoveContainer" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" Jan 28 18:47:51 crc kubenswrapper[4985]: E0128 18:47:51.722384 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae\": container with ID starting with de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae not found: ID does not exist" containerID="de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.722456 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae"} err="failed to get container status \"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae\": rpc error: code = NotFound desc = could not find container \"de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae\": container with ID starting with de64e42e803089ee3523c8ca1a909e7cb446d42abef4bd77839fe945a8303eae not found: ID does not exist" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.722499 4985 scope.go:117] "RemoveContainer" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5" Jan 28 18:47:51 crc kubenswrapper[4985]: E0128 18:47:51.723126 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5\": container with ID starting with 91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5 not found: ID does not exist" containerID="91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.723185 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5"} err="failed to get container status \"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5\": rpc error: code = NotFound desc = could not find container \"91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5\": container with ID starting with 91c401dffd3b03804a65374e20c66860f2bf0912625b75d147fdb7125522e3d5 not found: ID does not exist" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.723225 4985 scope.go:117] "RemoveContainer" containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" Jan 28 18:47:51 crc kubenswrapper[4985]: E0128 18:47:51.723935 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51\": container with ID starting with 8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51 not found: ID does not exist" 
containerID="8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.723971 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51"} err="failed to get container status \"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51\": rpc error: code = NotFound desc = could not find container \"8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51\": container with ID starting with 8f66e09e7eb1d406f3637607c61f0b8e33d961463c0a13b148dff2b276bbad51 not found: ID does not exist" Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.941168 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:51 crc kubenswrapper[4985]: I0128 18:47:51.953176 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6l7vb"] Jan 28 18:47:53 crc kubenswrapper[4985]: I0128 18:47:53.276199 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" path="/var/lib/kubelet/pods/13b350b8-ace5-45c9-9de3-0b4887795c48/volumes" Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.087383 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.103688 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.118296 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-s8hs9"] Jan 28 18:48:08 crc kubenswrapper[4985]: I0128 18:48:08.131281 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-dwwcb"] Jan 28 18:48:09 crc kubenswrapper[4985]: I0128 18:48:09.278524 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b64f0d6c-55b7-4eac-85f6-e78b581cbebc" path="/var/lib/kubelet/pods/b64f0d6c-55b7-4eac-85f6-e78b581cbebc/volumes" Jan 28 18:48:09 crc kubenswrapper[4985]: I0128 18:48:09.279812 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="feecd29d-1d64-47f4-a1af-e634b7d87f3a" path="/var/lib/kubelet/pods/feecd29d-1d64-47f4-a1af-e634b7d87f3a/volumes" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.185725 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.186085 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.186136 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.187112 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.187174 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87" gracePeriod=600 Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802043 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87" exitCode=0 Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802379 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87"} Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802407 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"} Jan 28 18:48:11 crc kubenswrapper[4985]: I0128 18:48:11.802427 4985 scope.go:117] "RemoveContainer" containerID="ff018c694429b7e2f2f66f3289eff8688e4072cd5ed675b74128bd4b55d8e108" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.732789 4985 scope.go:117] "RemoveContainer" containerID="badce37bfe68dc4bcc676f7b0c786e9f03574bc7e99b889419d42e1d88e90514" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.784335 4985 scope.go:117] "RemoveContainer" containerID="bf3748442896f3bbadb859f2d03e272740c521c498e8208b7d4bed6a247a0dd0" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.867810 4985 scope.go:117] "RemoveContainer" containerID="461350d6795ff69f1fd203af637d4dd96dfc2a84c72f138630ab057e524c2df1" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.903478 4985 scope.go:117] "RemoveContainer" containerID="38e38c87534fe5e2e6e7da069589b30c70844285bffd29f51db0ab1e32c6ef5c" Jan 28 18:48:39 crc kubenswrapper[4985]: I0128 18:48:39.979343 4985 scope.go:117] "RemoveContainer" containerID="ff21852bdb082ecfb847ad06c015a8a45e3369552ad08ad1a4b52a4cb479bc06" Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.047131 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.058863 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.069372 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-mzbqq"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.078485 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-tq8xx"] Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.299775 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52f84c63-5719-4c32-bbc7-d7960fe35d35" 
path="/var/lib/kubelet/pods/52f84c63-5719-4c32-bbc7-d7960fe35d35/volumes" Jan 28 18:49:11 crc kubenswrapper[4985]: I0128 18:49:11.335113 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc09e699-e5ce-4e02-b3ae-ce43d120e70d" path="/var/lib/kubelet/pods/dc09e699-e5ce-4e02-b3ae-ce43d120e70d/volumes" Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.048219 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.064675 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.078285 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.089044 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.100623 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-f01b-account-create-update-b985r"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.112950 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-b80b-account-create-update-mrvzq"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.123982 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7b9a-account-create-update-l47bt"] Jan 28 18:49:12 crc kubenswrapper[4985]: I0128 18:49:12.140615 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-jqvzw"] Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.278728 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae" path="/var/lib/kubelet/pods/253122d8-4dd9-4f48-bbd0-f6b7bb1bf0ae/volumes" Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.281153 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75ac3925-bebe-4c63-999f-073386005723" path="/var/lib/kubelet/pods/75ac3925-bebe-4c63-999f-073386005723/volumes" Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.282061 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4efe2ca-1bc9-40db-944e-fb86222e4f98" path="/var/lib/kubelet/pods/b4efe2ca-1bc9-40db-944e-fb86222e4f98/volumes" Jan 28 18:49:13 crc kubenswrapper[4985]: I0128 18:49:13.282893 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc08dbb5-2423-4fe9-8c21-a668459cad74" path="/var/lib/kubelet/pods/dc08dbb5-2423-4fe9-8c21-a668459cad74/volumes" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.249329 4985 scope.go:117] "RemoveContainer" containerID="4bc3d7f5e4e6dada67f4a141ee7828a9a6e0f2e232ee13a2c55fb56665c8dcf7" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.278586 4985 scope.go:117] "RemoveContainer" containerID="c45d2c9f516bceabb6c91c348f68e974205ef1034563c42f6346b513ae9f2b4e" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.349673 4985 scope.go:117] "RemoveContainer" containerID="d941727c28e1382267609d1ceda76e73a9f2d9cd3d596bc04e5cda672a1166cb" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.423200 4985 scope.go:117] "RemoveContainer" containerID="93175a518881e892d15535448d5c38da897596006be51be39132a6908ffae666" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.534129 4985 scope.go:117] "RemoveContainer" 
containerID="c2b4778aba3ad4aab0ffc010a57b2670dae7de8ea4b986e78468cc76f9181467" Jan 28 18:49:40 crc kubenswrapper[4985]: I0128 18:49:40.600627 4985 scope.go:117] "RemoveContainer" containerID="6a970a7bb0cf6a6924c094b8183cf37c24dab48878e09e30bf62063b33da4241" Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.044097 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.055524 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-wnljz"] Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.277066 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df5e9657-f657-4f0e-9d46-31c6942e70d2" path="/var/lib/kubelet/pods/df5e9657-f657-4f0e-9d46-31c6942e70d2/volumes" Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.958445 4985 generic.go:334] "Generic (PLEG): container finished" podID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerID="24ae801d110a2ccea339ddd0d6272cdb220439bc5457fb577978b735b741f7fc" exitCode=0 Jan 28 18:49:51 crc kubenswrapper[4985]: I0128 18:49:51.958537 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerDied","Data":"24ae801d110a2ccea339ddd0d6272cdb220439bc5457fb577978b735b741f7fc"} Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.540052 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.691188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") pod \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.691669 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") pod \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.691767 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") pod \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\" (UID: \"fbfc48e7-8a35-4fc6-b9fd-0c1735864116\") " Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.697413 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt" (OuterVolumeSpecName: "kube-api-access-zzgkt") pod "fbfc48e7-8a35-4fc6-b9fd-0c1735864116" (UID: "fbfc48e7-8a35-4fc6-b9fd-0c1735864116"). InnerVolumeSpecName "kube-api-access-zzgkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.730157 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory" (OuterVolumeSpecName: "inventory") pod "fbfc48e7-8a35-4fc6-b9fd-0c1735864116" (UID: "fbfc48e7-8a35-4fc6-b9fd-0c1735864116"). 
InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.734907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "fbfc48e7-8a35-4fc6-b9fd-0c1735864116" (UID: "fbfc48e7-8a35-4fc6-b9fd-0c1735864116"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.794783 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.794826 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.794836 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzgkt\" (UniqueName: \"kubernetes.io/projected/fbfc48e7-8a35-4fc6-b9fd-0c1735864116-kube-api-access-zzgkt\") on node \"crc\" DevicePath \"\"" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.982472 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" event={"ID":"fbfc48e7-8a35-4fc6-b9fd-0c1735864116","Type":"ContainerDied","Data":"3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56"} Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.982517 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3815895e125b2d993294d08b3a66a4e5ca54790173a42226945d76a4521c3e56" Jan 28 18:49:53 crc kubenswrapper[4985]: I0128 18:49:53.982551 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-42d8l" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.108718 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"] Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109341 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109359 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109388 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109397 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109409 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109417 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-content" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109432 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109441 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109457 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109465 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109479 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109486 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109502 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109512 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="extract-utilities" Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109530 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109538 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server" Jan 28 
18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109555 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109560 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server"
Jan 28 18:49:54 crc kubenswrapper[4985]: E0128 18:49:54.109581 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-utilities"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109587 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="extract-utilities"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109872 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a647567b-b5d7-4001-aeb7-085793d361ae" containerName="registry-server"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109911 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="13b350b8-ace5-45c9-9de3-0b4887795c48" containerName="registry-server"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109932 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ccb0c01-9886-4215-b63d-a0fdcc81a25c" containerName="registry-server"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.109946 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="fbfc48e7-8a35-4fc6-b9fd-0c1735864116" containerName="download-cache-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.111057 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.115666 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.115856 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.115874 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.120539 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.128717 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"]
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.204461 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.204563 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.204737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.307419 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.307618 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.307659 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.311604 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.314832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.324942 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.433730 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:49:54 crc kubenswrapper[4985]: I0128 18:49:54.992541 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"]
Jan 28 18:49:56 crc kubenswrapper[4985]: I0128 18:49:56.006615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerStarted","Data":"c9f2f497bdfc010d8b6ae9a2d144192486869cb9cba3b65990bd74b61e389db6"}
Jan 28 18:49:56 crc kubenswrapper[4985]: I0128 18:49:56.007114 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerStarted","Data":"633bd975811338a8dd128feac23d6ada0361cd583588d9bc8c1c9bc2d16bbffc"}
Jan 28 18:49:56 crc kubenswrapper[4985]: I0128 18:49:56.024635 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" podStartSLOduration=1.5969992880000001 podStartE2EDuration="2.024604424s" podCreationTimestamp="2026-01-28 18:49:54 +0000 UTC" firstStartedPulling="2026-01-28 18:49:55.002633994 +0000 UTC m=+2205.829196815" lastFinishedPulling="2026-01-28 18:49:55.43023913 +0000 UTC m=+2206.256801951" observedRunningTime="2026-01-28 18:49:56.021138916 +0000 UTC m=+2206.847701737" watchObservedRunningTime="2026-01-28 18:49:56.024604424 +0000 UTC m=+2206.851167245"
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.049265 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"]
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.060715 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-create-jdztq"]
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.070852 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-682b-account-create-update-fphsf"]
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.080901 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-create-jdztq"]
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.185886 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.185971 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.284391 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21d5020b-3b33-4e6c-95dd-9aad46d3f0e5" path="/var/lib/kubelet/pods/21d5020b-3b33-4e6c-95dd-9aad46d3f0e5/volumes"
Jan 28 18:50:11 crc kubenswrapper[4985]: I0128 18:50:11.288408 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2578b35-7408-46ed-bcee-8b0ff114cd33" path="/var/lib/kubelet/pods/c2578b35-7408-46ed-bcee-8b0ff114cd33/volumes"
Jan 28 18:50:15 crc kubenswrapper[4985]: I0128 18:50:15.041609 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"]
Jan 28 18:50:15 crc kubenswrapper[4985]: I0128 18:50:15.053840 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-m82mm"]
Jan 28 18:50:15 crc kubenswrapper[4985]: I0128 18:50:15.280365 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14e43739-91f4-43c9-9b01-5f0574a3b150" path="/var/lib/kubelet/pods/14e43739-91f4-43c9-9b01-5f0574a3b150/volumes"
Jan 28 18:50:21 crc kubenswrapper[4985]: I0128 18:50:21.073338 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"]
Jan 28 18:50:21 crc kubenswrapper[4985]: I0128 18:50:21.088896 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-rxz6k"]
Jan 28 18:50:21 crc kubenswrapper[4985]: I0128 18:50:21.279571 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc545ce7-58a7-4757-8eab-8b0a28570a49" path="/var/lib/kubelet/pods/dc545ce7-58a7-4757-8eab-8b0a28570a49/volumes"
Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.802310 4985 scope.go:117] "RemoveContainer" containerID="c83af2ab400014fc785ba01cb5de51bf84a3ea8da54f74af11e2f8a7b4d8bbce"
Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.841050 4985 scope.go:117] "RemoveContainer" containerID="5fa6b37534633df411a4bdc3fa77962a9df43667fb32532c9621de45df63d178"
Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.902564 4985 scope.go:117] "RemoveContainer" containerID="178c7940c1e7c85eaf00e787d93879f89e3e05e71f11cbc272b8188e9429d0c9"
Jan 28 18:50:40 crc kubenswrapper[4985]: I0128 18:50:40.954838 4985 scope.go:117] "RemoveContainer" containerID="ea52163bdf8a3e8c42d7f0dbeffc6baafb9ed87c32e573d1569132ee3f06dfb6"
Jan 28 18:50:41 crc kubenswrapper[4985]: I0128 18:50:41.025547 4985 scope.go:117] "RemoveContainer" containerID="382f43a07ac5b420a95def886ddd1d4454cef25ffaca287fa20c580c3c9e42fc"
Jan 28 18:50:41 crc kubenswrapper[4985]: I0128 18:50:41.185837 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:50:41 crc kubenswrapper[4985]: I0128 18:50:41.185913 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:51:07 crc kubenswrapper[4985]: I0128 18:51:07.078975 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"]
Jan 28 18:51:07 crc kubenswrapper[4985]: I0128 18:51:07.102362 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-559zx"]
Jan 28 18:51:07 crc kubenswrapper[4985]: I0128 18:51:07.279431 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aabefa44-123b-48ce-a38b-8c5f6ed32b73" path="/var/lib/kubelet/pods/aabefa44-123b-48ce-a38b-8c5f6ed32b73/volumes"
Jan 28 18:51:09 crc kubenswrapper[4985]: I0128 18:51:09.815294 4985 generic.go:334] "Generic (PLEG): container finished" podID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerID="c9f2f497bdfc010d8b6ae9a2d144192486869cb9cba3b65990bd74b61e389db6" exitCode=0
Jan 28 18:51:09 crc kubenswrapper[4985]: I0128 18:51:09.815392 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerDied","Data":"c9f2f497bdfc010d8b6ae9a2d144192486869cb9cba3b65990bd74b61e389db6"}
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.187037 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.187626 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.187724 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.189527 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.189627 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" gracePeriod=600
Jan 28 18:51:11 crc kubenswrapper[4985]: E0128 18:51:11.333794 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.530358 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.646235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") pod \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") "
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.646343 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") pod \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") "
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.646385 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") pod \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\" (UID: \"ed5a5127-7214-4f45-bda0-a1c6ecbaaede\") "
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.656288 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l" (OuterVolumeSpecName: "kube-api-access-l528l") pod "ed5a5127-7214-4f45-bda0-a1c6ecbaaede" (UID: "ed5a5127-7214-4f45-bda0-a1c6ecbaaede"). InnerVolumeSpecName "kube-api-access-l528l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.684502 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory" (OuterVolumeSpecName: "inventory") pod "ed5a5127-7214-4f45-bda0-a1c6ecbaaede" (UID: "ed5a5127-7214-4f45-bda0-a1c6ecbaaede"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.684978 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ed5a5127-7214-4f45-bda0-a1c6ecbaaede" (UID: "ed5a5127-7214-4f45-bda0-a1c6ecbaaede"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.749829 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.749863 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.749873 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l528l\" (UniqueName: \"kubernetes.io/projected/ed5a5127-7214-4f45-bda0-a1c6ecbaaede-kube-api-access-l528l\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.845781 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" exitCode=0
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.845906 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"}
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.845963 4985 scope.go:117] "RemoveContainer" containerID="b39401ce5f91585a2b4b22e75d0e797d75465500360ec9051ef07c933730fe87"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.846770 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:51:11 crc kubenswrapper[4985]: E0128 18:51:11.847065 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.856895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn" event={"ID":"ed5a5127-7214-4f45-bda0-a1c6ecbaaede","Type":"ContainerDied","Data":"633bd975811338a8dd128feac23d6ada0361cd583588d9bc8c1c9bc2d16bbffc"}
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.856943 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="633bd975811338a8dd128feac23d6ada0361cd583588d9bc8c1c9bc2d16bbffc"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.857064 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.989702 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"]
Jan 28 18:51:11 crc kubenswrapper[4985]: E0128 18:51:11.990348 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.990367 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.990626 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ed5a5127-7214-4f45-bda0-a1c6ecbaaede" containerName="configure-network-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.992904 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.996023 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.996234 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.996553 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh"
Jan 28 18:51:11 crc kubenswrapper[4985]: I0128 18:51:11.997966 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.016850 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"]
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.159396 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.160022 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.160199 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.262684 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.263078 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.263210 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.266974 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.267129 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.282053 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-5h28l\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.320163 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:12 crc kubenswrapper[4985]: I0128 18:51:12.874093 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"]
Jan 28 18:51:13 crc kubenswrapper[4985]: I0128 18:51:13.878773 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerStarted","Data":"98d7adb89708d071f297f54f218b65a95a49b9820984fb652f611d4d070a95ca"}
Jan 28 18:51:13 crc kubenswrapper[4985]: I0128 18:51:13.879367 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerStarted","Data":"eae23b0ff4b25c1d144fc5ec4fddcb5528ef6851dd78e1e85edddba6a291da24"}
Jan 28 18:51:13 crc kubenswrapper[4985]: I0128 18:51:13.899698 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" podStartSLOduration=2.4909154510000002 podStartE2EDuration="2.899664224s" podCreationTimestamp="2026-01-28 18:51:11 +0000 UTC" firstStartedPulling="2026-01-28 18:51:12.880857673 +0000 UTC m=+2283.707420494" lastFinishedPulling="2026-01-28 18:51:13.289606446 +0000 UTC m=+2284.116169267" observedRunningTime="2026-01-28 18:51:13.894191979 +0000 UTC m=+2284.720754820" watchObservedRunningTime="2026-01-28 18:51:13.899664224 +0000 UTC m=+2284.726227085"
Jan 28 18:51:18 crc kubenswrapper[4985]: I0128 18:51:18.928951 4985 generic.go:334] "Generic (PLEG): container finished" podID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerID="98d7adb89708d071f297f54f218b65a95a49b9820984fb652f611d4d070a95ca" exitCode=0
Jan 28 18:51:18 crc kubenswrapper[4985]: I0128 18:51:18.929103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerDied","Data":"98d7adb89708d071f297f54f218b65a95a49b9820984fb652f611d4d070a95ca"}
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.448784 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.573838 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") pod \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") "
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.574010 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") pod \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") "
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.574061 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") pod \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\" (UID: \"ae55970b-52a8-4bd7-8d82-853e9cd4ad32\") "
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.586206 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5" (OuterVolumeSpecName: "kube-api-access-2dtl5") pod "ae55970b-52a8-4bd7-8d82-853e9cd4ad32" (UID: "ae55970b-52a8-4bd7-8d82-853e9cd4ad32"). InnerVolumeSpecName "kube-api-access-2dtl5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.610508 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory" (OuterVolumeSpecName: "inventory") pod "ae55970b-52a8-4bd7-8d82-853e9cd4ad32" (UID: "ae55970b-52a8-4bd7-8d82-853e9cd4ad32"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.627944 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "ae55970b-52a8-4bd7-8d82-853e9cd4ad32" (UID: "ae55970b-52a8-4bd7-8d82-853e9cd4ad32"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.677487 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2dtl5\" (UniqueName: \"kubernetes.io/projected/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-kube-api-access-2dtl5\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.677523 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.677533 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/ae55970b-52a8-4bd7-8d82-853e9cd4ad32-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.950844 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l" event={"ID":"ae55970b-52a8-4bd7-8d82-853e9cd4ad32","Type":"ContainerDied","Data":"eae23b0ff4b25c1d144fc5ec4fddcb5528ef6851dd78e1e85edddba6a291da24"}
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.950889 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eae23b0ff4b25c1d144fc5ec4fddcb5528ef6851dd78e1e85edddba6a291da24"
Jan 28 18:51:20 crc kubenswrapper[4985]: I0128 18:51:20.950917 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-5h28l"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.033095 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"]
Jan 28 18:51:21 crc kubenswrapper[4985]: E0128 18:51:21.034473 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.034506 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.034921 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae55970b-52a8-4bd7-8d82-853e9cd4ad32" containerName="validate-network-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.036066 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.040170 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.040705 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.041406 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.041428 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.051273 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"]
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.091505 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.092042 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.092338 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.194227 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.194403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.194472 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.199817 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.204201 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.216386 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-25775\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.384893 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:21 crc kubenswrapper[4985]: I0128 18:51:21.956667 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"]
Jan 28 18:51:21 crc kubenswrapper[4985]: W0128 18:51:21.962622 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3baf8df5_1989_4678_8268_058f46511cfd.slice/crio-de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15 WatchSource:0}: Error finding container de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15: Status 404 returned error can't find the container with id de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15
Jan 28 18:51:22 crc kubenswrapper[4985]: I0128 18:51:22.984586 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerStarted","Data":"4383685dfaa76d9d94b6ad842212a447752cd35bd7edf70dce99f868bdd8e572"}
Jan 28 18:51:22 crc kubenswrapper[4985]: I0128 18:51:22.984993 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerStarted","Data":"de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15"}
Jan 28 18:51:23 crc kubenswrapper[4985]: I0128 18:51:23.009945 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" podStartSLOduration=2.551916071 podStartE2EDuration="3.009921578s" podCreationTimestamp="2026-01-28 18:51:20 +0000 UTC" firstStartedPulling="2026-01-28 18:51:21.966603154 +0000 UTC m=+2292.793165975" lastFinishedPulling="2026-01-28 18:51:22.424608641 +0000 UTC m=+2293.251171482" observedRunningTime="2026-01-28 18:51:23.001448438 +0000 UTC m=+2293.828011259" watchObservedRunningTime="2026-01-28 18:51:23.009921578 +0000 UTC m=+2293.836484409"
Jan 28 18:51:24 crc kubenswrapper[4985]: I0128 18:51:24.264441 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:51:24 crc kubenswrapper[4985]: E0128 18:51:24.265087 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:35 crc kubenswrapper[4985]: I0128 18:51:35.263881 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:51:35 crc kubenswrapper[4985]: E0128 18:51:35.264739 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:41 crc kubenswrapper[4985]: I0128 18:51:41.241685 4985 scope.go:117] "RemoveContainer" containerID="db5c8f620d59499400c9788d3b5dfb76a365065e272b490b2eae142e49cd78fa"
Jan 28 18:51:47 crc kubenswrapper[4985]: I0128 18:51:47.264694 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:51:47 crc kubenswrapper[4985]: E0128 18:51:47.265727 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:51:57 crc kubenswrapper[4985]: I0128 18:51:57.359580 4985 generic.go:334] "Generic (PLEG): container finished" podID="3baf8df5-1989-4678-8268-058f46511cfd" containerID="4383685dfaa76d9d94b6ad842212a447752cd35bd7edf70dce99f868bdd8e572" exitCode=0
Jan 28 18:51:57 crc kubenswrapper[4985]: I0128 18:51:57.359700 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerDied","Data":"4383685dfaa76d9d94b6ad842212a447752cd35bd7edf70dce99f868bdd8e572"}
Jan 28 18:51:58 crc kubenswrapper[4985]: I0128 18:51:58.863745 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.062655 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") pod \"3baf8df5-1989-4678-8268-058f46511cfd\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") "
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.062936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") pod \"3baf8df5-1989-4678-8268-058f46511cfd\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") "
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.063058 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") pod \"3baf8df5-1989-4678-8268-058f46511cfd\" (UID: \"3baf8df5-1989-4678-8268-058f46511cfd\") "
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.068597 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn" (OuterVolumeSpecName: "kube-api-access-htjkn") pod "3baf8df5-1989-4678-8268-058f46511cfd" (UID: "3baf8df5-1989-4678-8268-058f46511cfd"). InnerVolumeSpecName "kube-api-access-htjkn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.099824 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory" (OuterVolumeSpecName: "inventory") pod "3baf8df5-1989-4678-8268-058f46511cfd" (UID: "3baf8df5-1989-4678-8268-058f46511cfd"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.107131 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3baf8df5-1989-4678-8268-058f46511cfd" (UID: "3baf8df5-1989-4678-8268-058f46511cfd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.166503 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-inventory\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.166779 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3baf8df5-1989-4678-8268-058f46511cfd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.166872 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htjkn\" (UniqueName: \"kubernetes.io/projected/3baf8df5-1989-4678-8268-058f46511cfd-kube-api-access-htjkn\") on node \"crc\" DevicePath \"\""
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.382943 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775" event={"ID":"3baf8df5-1989-4678-8268-058f46511cfd","Type":"ContainerDied","Data":"de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15"}
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.382988 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de3c10e770d08fa92a7e1977751e9575e957222523a49e6ed9cb591a0045fa15"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.382993 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-25775"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.463419 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"]
Jan 28 18:51:59 crc kubenswrapper[4985]: E0128 18:51:59.463900 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3baf8df5-1989-4678-8268-058f46511cfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.463929 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3baf8df5-1989-4678-8268-058f46511cfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.464218 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3baf8df5-1989-4678-8268-058f46511cfd" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.465060 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.467749 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.467830 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.467977 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.468898 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.481465 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"]
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.574989 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.576014 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.576057 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.678719 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.679063 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.679344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.693363 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.693405 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.705840 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:51:59 crc kubenswrapper[4985]: I0128 18:51:59.782340 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"
Jan 28 18:52:00 crc kubenswrapper[4985]: I0128 18:52:00.264662 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:52:00 crc kubenswrapper[4985]: E0128 18:52:00.265266 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:52:00 crc kubenswrapper[4985]: I0128 18:52:00.375472 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc"]
Jan 28 18:52:00 crc kubenswrapper[4985]: I0128 18:52:00.396754 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" event={"ID":"89fa72dd-7320-41fe-8df4-161d84d41b84","Type":"ContainerStarted","Data":"69e84bb4165150e69508936186fc071f5e407b53051ee5c709bb96091d6e8096"}
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.420617 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" event={"ID":"89fa72dd-7320-41fe-8df4-161d84d41b84","Type":"ContainerStarted","Data":"0e0531b2a17e581c154af6c43df638fbe2cddb08d8bf5196709cce369d24856b"}
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.443442 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" podStartSLOduration=1.769456439 podStartE2EDuration="2.443419345s" podCreationTimestamp="2026-01-28 18:51:59 +0000 UTC" firstStartedPulling="2026-01-28 18:52:00.373607941 +0000 UTC m=+2331.200170762" lastFinishedPulling="2026-01-28 18:52:01.047570847 +0000 UTC m=+2331.874133668" observedRunningTime="2026-01-28 18:52:01.437389034 +0000 UTC m=+2332.263951875" watchObservedRunningTime="2026-01-28 18:52:01.443419345 +0000 UTC m=+2332.269982166"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.692322 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-92ddg"]
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.694563 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.717900 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92ddg"]
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.844140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.844387 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.844468 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.946361 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.946558 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.946662 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.947080 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.947109 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:01 crc kubenswrapper[4985]: I0128 18:52:01.965323 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"community-operators-92ddg\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") " pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:02 crc kubenswrapper[4985]: I0128 18:52:02.032365 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:02 crc kubenswrapper[4985]: I0128 18:52:02.607183 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-92ddg"]
Jan 28 18:52:03 crc kubenswrapper[4985]: I0128 18:52:03.446897 4985 generic.go:334] "Generic (PLEG): container finished" podID="2599bc38-c112-4351-a069-1e7f48fd913e" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" exitCode=0
Jan 28 18:52:03 crc kubenswrapper[4985]: I0128 18:52:03.447215 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d"}
Jan 28 18:52:03 crc kubenswrapper[4985]: I0128 18:52:03.447302 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerStarted","Data":"c4796d97bbbc44e9555f2a920af4e29b811b1c5305de97b4f4d8ea5af4e33a12"}
Jan 28 18:52:06 crc kubenswrapper[4985]: I0128 18:52:06.482943 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerStarted","Data":"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0"}
Jan 28 18:52:10 crc kubenswrapper[4985]: I0128 18:52:10.532784 4985 generic.go:334] "Generic (PLEG): container finished" podID="2599bc38-c112-4351-a069-1e7f48fd913e" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" exitCode=0
Jan 28 18:52:10 crc kubenswrapper[4985]: I0128 18:52:10.532906 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0"}
Jan 28 18:52:13 crc kubenswrapper[4985]: I0128 18:52:13.265645 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:52:13 crc kubenswrapper[4985]: E0128 18:52:13.266493 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:52:13 crc kubenswrapper[4985]: I0128 18:52:13.584376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerStarted","Data":"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764"}
Jan 28 18:52:13 crc kubenswrapper[4985]: I0128 18:52:13.605588 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-92ddg" podStartSLOduration=3.117826319 podStartE2EDuration="12.605562409s" podCreationTimestamp="2026-01-28 18:52:01 +0000 UTC" firstStartedPulling="2026-01-28 18:52:03.449571925 +0000 UTC m=+2334.276134756" lastFinishedPulling="2026-01-28 18:52:12.937308025 +0000 UTC m=+2343.763870846" observedRunningTime="2026-01-28 18:52:13.601188755 +0000 UTC m=+2344.427751606" watchObservedRunningTime="2026-01-28 18:52:13.605562409 +0000 UTC m=+2344.432125230"
Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.032699 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.036065 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.091601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.806787 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:22 crc kubenswrapper[4985]: I0128 18:52:22.857911 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92ddg"]
Jan 28 18:52:24 crc kubenswrapper[4985]: I0128 18:52:24.787600 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-92ddg" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" containerID="cri-o://f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" gracePeriod=2
Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.344279 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-92ddg"
Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.504114 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") pod \"2599bc38-c112-4351-a069-1e7f48fd913e\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") "
Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.504336 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") pod \"2599bc38-c112-4351-a069-1e7f48fd913e\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") "
Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.504374 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") pod \"2599bc38-c112-4351-a069-1e7f48fd913e\" (UID: \"2599bc38-c112-4351-a069-1e7f48fd913e\") "
Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.505395 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities" (OuterVolumeSpecName: "utilities") pod "2599bc38-c112-4351-a069-1e7f48fd913e" (UID: "2599bc38-c112-4351-a069-1e7f48fd913e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.510943 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj" (OuterVolumeSpecName: "kube-api-access-wf6mj") pod "2599bc38-c112-4351-a069-1e7f48fd913e" (UID: "2599bc38-c112-4351-a069-1e7f48fd913e"). InnerVolumeSpecName "kube-api-access-wf6mj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.556451 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2599bc38-c112-4351-a069-1e7f48fd913e" (UID: "2599bc38-c112-4351-a069-1e7f48fd913e"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.606890 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wf6mj\" (UniqueName: \"kubernetes.io/projected/2599bc38-c112-4351-a069-1e7f48fd913e-kube-api-access-wf6mj\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.606918 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.606927 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2599bc38-c112-4351-a069-1e7f48fd913e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807053 4985 generic.go:334] "Generic (PLEG): container finished" podID="2599bc38-c112-4351-a069-1e7f48fd913e" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" exitCode=0 Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807103 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764"} Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807130 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-92ddg" event={"ID":"2599bc38-c112-4351-a069-1e7f48fd913e","Type":"ContainerDied","Data":"c4796d97bbbc44e9555f2a920af4e29b811b1c5305de97b4f4d8ea5af4e33a12"} Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807147 4985 scope.go:117] "RemoveContainer" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.807243 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-92ddg" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.848096 4985 scope.go:117] "RemoveContainer" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.873482 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-92ddg"] Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.877522 4985 scope.go:117] "RemoveContainer" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.888303 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-92ddg"] Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.940213 4985 scope.go:117] "RemoveContainer" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" Jan 28 18:52:25 crc kubenswrapper[4985]: E0128 18:52:25.940631 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764\": container with ID starting with f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764 not found: ID does not exist" containerID="f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.940664 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764"} err="failed to get container status \"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764\": rpc error: code = NotFound desc = could not find container \"f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764\": container with ID starting with f14040cf7d74da758ae14b27168bf4f373cd86cf87837618d45e1b35086ff764 not found: ID does not exist" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.940684 4985 scope.go:117] "RemoveContainer" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" Jan 28 18:52:25 crc kubenswrapper[4985]: E0128 18:52:25.941088 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0\": container with ID starting with 8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0 not found: ID does not exist" containerID="8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.941125 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0"} err="failed to get container status \"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0\": rpc error: code = NotFound desc = could not find container \"8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0\": container with ID starting with 8c02c04fc3b9fc2a873fe6d5fd7b9b84206c6ad4a1cf848fb0149eab5a7f49c0 not found: ID does not exist" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.941150 4985 scope.go:117] "RemoveContainer" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" Jan 28 18:52:25 crc kubenswrapper[4985]: E0128 18:52:25.941460 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d\": container with ID starting with 3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d not found: ID does not exist" containerID="3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d" Jan 28 18:52:25 crc kubenswrapper[4985]: I0128 18:52:25.941534 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d"} err="failed to get container status \"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d\": rpc error: code = NotFound desc = could not find container \"3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d\": container with ID starting with 3c6505e2fe7115b1c1d4a8272b8c3ca60e5c64405ac08bfda1f38ac39503666d not found: ID does not exist" Jan 28 18:52:26 crc kubenswrapper[4985]: I0128 18:52:26.264697 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:52:26 crc kubenswrapper[4985]: E0128 18:52:26.264946 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:27 crc kubenswrapper[4985]: I0128 18:52:27.278474 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" path="/var/lib/kubelet/pods/2599bc38-c112-4351-a069-1e7f48fd913e/volumes" Jan 28 18:52:30 crc kubenswrapper[4985]: I0128 18:52:30.058952 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:52:30 crc kubenswrapper[4985]: I0128 18:52:30.068966 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/heat-db-sync-r7ml7"] Jan 28 18:52:31 crc kubenswrapper[4985]: I0128 18:52:31.279676 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="627220be-fa5f-49a6-9c9e-b3ae5e49afec" path="/var/lib/kubelet/pods/627220be-fa5f-49a6-9c9e-b3ae5e49afec/volumes" Jan 28 18:52:40 crc kubenswrapper[4985]: I0128 18:52:40.264153 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:52:40 crc kubenswrapper[4985]: E0128 18:52:40.264952 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:41 crc kubenswrapper[4985]: I0128 18:52:41.303885 4985 scope.go:117] "RemoveContainer" containerID="48668effb10b8c0dfeaba93e4a156675d4c8985321775751a1f4f96f69975324" Jan 28 18:52:45 crc kubenswrapper[4985]: I0128 18:52:45.033563 4985 generic.go:334] "Generic (PLEG): container finished" podID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerID="0e0531b2a17e581c154af6c43df638fbe2cddb08d8bf5196709cce369d24856b" exitCode=0 Jan 28 18:52:45 crc 
Jan 28 18:52:45 crc kubenswrapper[4985]: I0128 18:52:45.033654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" event={"ID":"89fa72dd-7320-41fe-8df4-161d84d41b84","Type":"ContainerDied","Data":"0e0531b2a17e581c154af6c43df638fbe2cddb08d8bf5196709cce369d24856b"} Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.623371 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.775235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") pod \"89fa72dd-7320-41fe-8df4-161d84d41b84\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.775718 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") pod \"89fa72dd-7320-41fe-8df4-161d84d41b84\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.776122 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") pod \"89fa72dd-7320-41fe-8df4-161d84d41b84\" (UID: \"89fa72dd-7320-41fe-8df4-161d84d41b84\") " Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.780572 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5" (OuterVolumeSpecName: "kube-api-access-nhtg5") pod "89fa72dd-7320-41fe-8df4-161d84d41b84" (UID: "89fa72dd-7320-41fe-8df4-161d84d41b84"). InnerVolumeSpecName "kube-api-access-nhtg5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.808050 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "89fa72dd-7320-41fe-8df4-161d84d41b84" (UID: "89fa72dd-7320-41fe-8df4-161d84d41b84"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.817592 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory" (OuterVolumeSpecName: "inventory") pod "89fa72dd-7320-41fe-8df4-161d84d41b84" (UID: "89fa72dd-7320-41fe-8df4-161d84d41b84"). InnerVolumeSpecName "inventory".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.879004 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nhtg5\" (UniqueName: \"kubernetes.io/projected/89fa72dd-7320-41fe-8df4-161d84d41b84-kube-api-access-nhtg5\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.879041 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:46 crc kubenswrapper[4985]: I0128 18:52:46.879050 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/89fa72dd-7320-41fe-8df4-161d84d41b84-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.055073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" event={"ID":"89fa72dd-7320-41fe-8df4-161d84d41b84","Type":"ContainerDied","Data":"69e84bb4165150e69508936186fc071f5e407b53051ee5c709bb96091d6e8096"} Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.055119 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69e84bb4165150e69508936186fc071f5e407b53051ee5c709bb96091d6e8096" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.055128 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.161646 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pbrcd"] Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162201 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-content" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162217 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-content" Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162232 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162240 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162422 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162432 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" Jan 28 18:52:47 crc kubenswrapper[4985]: E0128 18:52:47.162458 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-utilities" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162465 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="extract-utilities" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162672 4985 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89fa72dd-7320-41fe-8df4-161d84d41b84" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.162702 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2599bc38-c112-4351-a069-1e7f48fd913e" containerName="registry-server" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.163576 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.166240 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.167220 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.167308 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.167902 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.174797 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pbrcd"] Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.287041 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.287118 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.287158 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.389043 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.389118 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: 
\"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.389151 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.393442 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.394411 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.406736 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"ssh-known-hosts-edpm-deployment-pbrcd\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:47 crc kubenswrapper[4985]: I0128 18:52:47.500715 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:48 crc kubenswrapper[4985]: I0128 18:52:48.100555 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-pbrcd"] Jan 28 18:52:48 crc kubenswrapper[4985]: E0128 18:52:48.107619 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99c460d4_80df_4aac_9fc5_20198855b361.slice/crio-21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d\": RecentStats: unable to find data in memory cache]" Jan 28 18:52:48 crc kubenswrapper[4985]: I0128 18:52:48.109133 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 18:52:49 crc kubenswrapper[4985]: I0128 18:52:49.076536 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerStarted","Data":"21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d"} Jan 28 18:52:50 crc kubenswrapper[4985]: I0128 18:52:50.088449 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerStarted","Data":"2741ec846d0c85b125eb72113b900c63992136cdaabaee56c98434e51f940177"} Jan 28 18:52:50 crc kubenswrapper[4985]: I0128 18:52:50.107238 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" podStartSLOduration=1.949089676 podStartE2EDuration="3.107213647s" podCreationTimestamp="2026-01-28 18:52:47 +0000 UTC" firstStartedPulling="2026-01-28 18:52:48.108891268 +0000 UTC m=+2378.935454089" lastFinishedPulling="2026-01-28 18:52:49.267015249 +0000 UTC m=+2380.093578060" observedRunningTime="2026-01-28 18:52:50.102196145 +0000 UTC m=+2380.928758956" watchObservedRunningTime="2026-01-28 18:52:50.107213647 +0000 UTC m=+2380.933776468" Jan 28 18:52:54 crc kubenswrapper[4985]: I0128 18:52:54.264233 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:52:54 crc kubenswrapper[4985]: E0128 18:52:54.264947 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:52:56 crc kubenswrapper[4985]: I0128 18:52:56.166921 4985 generic.go:334] "Generic (PLEG): container finished" podID="99c460d4-80df-4aac-9fc5-20198855b361" containerID="2741ec846d0c85b125eb72113b900c63992136cdaabaee56c98434e51f940177" exitCode=0 Jan 28 18:52:56 crc kubenswrapper[4985]: I0128 18:52:56.167226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerDied","Data":"2741ec846d0c85b125eb72113b900c63992136cdaabaee56c98434e51f940177"} Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.669641 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.830138 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") pod \"99c460d4-80df-4aac-9fc5-20198855b361\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.830283 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") pod \"99c460d4-80df-4aac-9fc5-20198855b361\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.831152 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") pod \"99c460d4-80df-4aac-9fc5-20198855b361\" (UID: \"99c460d4-80df-4aac-9fc5-20198855b361\") " Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.842321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg" (OuterVolumeSpecName: "kube-api-access-dc4kg") pod "99c460d4-80df-4aac-9fc5-20198855b361" (UID: "99c460d4-80df-4aac-9fc5-20198855b361"). InnerVolumeSpecName "kube-api-access-dc4kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.862273 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "99c460d4-80df-4aac-9fc5-20198855b361" (UID: "99c460d4-80df-4aac-9fc5-20198855b361"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.863603 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "99c460d4-80df-4aac-9fc5-20198855b361" (UID: "99c460d4-80df-4aac-9fc5-20198855b361"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.934431 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.934474 4985 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/99c460d4-80df-4aac-9fc5-20198855b361-inventory-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:57 crc kubenswrapper[4985]: I0128 18:52:57.934486 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc4kg\" (UniqueName: \"kubernetes.io/projected/99c460d4-80df-4aac-9fc5-20198855b361-kube-api-access-dc4kg\") on node \"crc\" DevicePath \"\"" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.199533 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" event={"ID":"99c460d4-80df-4aac-9fc5-20198855b361","Type":"ContainerDied","Data":"21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d"} Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.199576 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21f3a114fb34bc172393e6035f99b2c7a47aa748ffdcd1a9d9718c53a6ff848d" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.199660 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-pbrcd" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.273535 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l"] Jan 28 18:52:58 crc kubenswrapper[4985]: E0128 18:52:58.274272 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99c460d4-80df-4aac-9fc5-20198855b361" containerName="ssh-known-hosts-edpm-deployment" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.274288 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="99c460d4-80df-4aac-9fc5-20198855b361" containerName="ssh-known-hosts-edpm-deployment" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.274519 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="99c460d4-80df-4aac-9fc5-20198855b361" containerName="ssh-known-hosts-edpm-deployment" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.275483 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.278318 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.278337 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.278551 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.279109 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.316677 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l"] Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.464236 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.464362 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.464584 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.567421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.567571 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.568625 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod 
\"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.574144 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.579805 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.590782 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-8kf5l\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:58 crc kubenswrapper[4985]: I0128 18:52:58.603678 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:52:59 crc kubenswrapper[4985]: I0128 18:52:59.144959 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l"] Jan 28 18:52:59 crc kubenswrapper[4985]: I0128 18:52:59.211956 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerStarted","Data":"dcfe22e8dda947e5709a88443fa0516b970a985732c45bb442af182dc3677b50"} Jan 28 18:53:00 crc kubenswrapper[4985]: I0128 18:53:00.224312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerStarted","Data":"1e43cecc1e91746954a01e7c22855fd2395a40008bd1135ede7e01312ad4e651"} Jan 28 18:53:00 crc kubenswrapper[4985]: I0128 18:53:00.248883 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" podStartSLOduration=1.8293697500000001 podStartE2EDuration="2.248867466s" podCreationTimestamp="2026-01-28 18:52:58 +0000 UTC" firstStartedPulling="2026-01-28 18:52:59.151427572 +0000 UTC m=+2389.977990393" lastFinishedPulling="2026-01-28 18:52:59.570925258 +0000 UTC m=+2390.397488109" observedRunningTime="2026-01-28 18:53:00.241907769 +0000 UTC m=+2391.068470590" watchObservedRunningTime="2026-01-28 18:53:00.248867466 +0000 UTC m=+2391.075430287" Jan 28 18:53:07 crc kubenswrapper[4985]: I0128 18:53:07.265492 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:53:07 crc kubenswrapper[4985]: E0128 18:53:07.267052 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:53:08 crc kubenswrapper[4985]: I0128 18:53:08.348396 4985 generic.go:334] "Generic (PLEG): container finished" podID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerID="1e43cecc1e91746954a01e7c22855fd2395a40008bd1135ede7e01312ad4e651" exitCode=0 Jan 28 18:53:08 crc kubenswrapper[4985]: I0128 18:53:08.348527 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerDied","Data":"1e43cecc1e91746954a01e7c22855fd2395a40008bd1135ede7e01312ad4e651"} Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.340280 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.380749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" event={"ID":"748912b6-cdb7-40bc-875e-563d7913a6dd","Type":"ContainerDied","Data":"dcfe22e8dda947e5709a88443fa0516b970a985732c45bb442af182dc3677b50"} Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.381514 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcfe22e8dda947e5709a88443fa0516b970a985732c45bb442af182dc3677b50" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.380816 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-8kf5l" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.460188 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") pod \"748912b6-cdb7-40bc-875e-563d7913a6dd\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.460306 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") pod \"748912b6-cdb7-40bc-875e-563d7913a6dd\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.460457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") pod \"748912b6-cdb7-40bc-875e-563d7913a6dd\" (UID: \"748912b6-cdb7-40bc-875e-563d7913a6dd\") " Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.467105 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb"] Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.467474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s" (OuterVolumeSpecName: "kube-api-access-zxz7s") pod "748912b6-cdb7-40bc-875e-563d7913a6dd" (UID: "748912b6-cdb7-40bc-875e-563d7913a6dd"). InnerVolumeSpecName "kube-api-access-zxz7s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:53:10 crc kubenswrapper[4985]: E0128 18:53:10.467821 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.467845 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.468156 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="748912b6-cdb7-40bc-875e-563d7913a6dd" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.469174 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.476832 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb"] Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.511782 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "748912b6-cdb7-40bc-875e-563d7913a6dd" (UID: "748912b6-cdb7-40bc-875e-563d7913a6dd"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.522731 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory" (OuterVolumeSpecName: "inventory") pod "748912b6-cdb7-40bc-875e-563d7913a6dd" (UID: "748912b6-cdb7-40bc-875e-563d7913a6dd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.562782 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.562997 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563052 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563142 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563155 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zxz7s\" (UniqueName: \"kubernetes.io/projected/748912b6-cdb7-40bc-875e-563d7913a6dd-kube-api-access-zxz7s\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.563165 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/748912b6-cdb7-40bc-875e-563d7913a6dd-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.664867 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.664973 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.667412 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.668903 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.669263 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.682018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:10 crc kubenswrapper[4985]: I0128 18:53:10.913834 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:11 crc kubenswrapper[4985]: I0128 18:53:11.446751 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb"] Jan 28 18:53:12 crc kubenswrapper[4985]: I0128 18:53:12.404931 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerStarted","Data":"2f9ce5b67c7b62c616f681b2a0211eaf2edaa3939c553a248a0d4ed67636d035"} Jan 28 18:53:12 crc kubenswrapper[4985]: I0128 18:53:12.405331 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerStarted","Data":"de71345ecee583b6977af81b154580f32016dcb1dd583e6778840ce7062e010c"} Jan 28 18:53:12 crc kubenswrapper[4985]: I0128 18:53:12.433935 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" podStartSLOduration=1.935634587 podStartE2EDuration="2.433916992s" podCreationTimestamp="2026-01-28 18:53:10 +0000 UTC" firstStartedPulling="2026-01-28 18:53:11.44546376 +0000 UTC m=+2402.272026581" lastFinishedPulling="2026-01-28 18:53:11.943746165 +0000 UTC m=+2402.770308986" observedRunningTime="2026-01-28 18:53:12.421018147 +0000 UTC m=+2403.247580968" watchObservedRunningTime="2026-01-28 18:53:12.433916992 +0000 UTC m=+2403.260479813" Jan 28 18:53:21 crc kubenswrapper[4985]: I0128 18:53:21.276983 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:53:21 crc kubenswrapper[4985]: E0128 18:53:21.278955 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 28 18:53:21 crc kubenswrapper[4985]: I0128 18:53:21.276983 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:53:21 crc kubenswrapper[4985]: E0128 18:53:21.278955 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:53:21 crc kubenswrapper[4985]: I0128 18:53:21.498412 4985 generic.go:334] "Generic (PLEG): container finished" podID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerID="2f9ce5b67c7b62c616f681b2a0211eaf2edaa3939c553a248a0d4ed67636d035" exitCode=0 Jan 28 18:53:21 crc kubenswrapper[4985]: I0128 18:53:21.498509 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerDied","Data":"2f9ce5b67c7b62c616f681b2a0211eaf2edaa3939c553a248a0d4ed67636d035"} Jan 28 18:53:22 crc kubenswrapper[4985]: I0128 18:53:22.966747 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.094493 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") pod \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.094563 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") pod \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.094587 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") pod \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\" (UID: \"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1\") " Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.101195 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj" (OuterVolumeSpecName: "kube-api-access-rltjj") pod "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" (UID: "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1"). InnerVolumeSpecName "kube-api-access-rltjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.131087 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory" (OuterVolumeSpecName: "inventory") pod "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" (UID: "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.133007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" (UID: "b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.197148 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rltjj\" (UniqueName: \"kubernetes.io/projected/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-kube-api-access-rltjj\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.197179 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.197188 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.524589 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" event={"ID":"b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1","Type":"ContainerDied","Data":"de71345ecee583b6977af81b154580f32016dcb1dd583e6778840ce7062e010c"} Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.525095 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de71345ecee583b6977af81b154580f32016dcb1dd583e6778840ce7062e010c" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.524655 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.639020 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"] Jan 28 18:53:23 crc kubenswrapper[4985]: E0128 18:53:23.639574 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.639595 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.639830 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.641120 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.643517 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.645721 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.645839 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646024 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646226 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646028 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646418 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.646683 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.647436 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.665123 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"] Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.811630 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.811908 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812065 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812462 4985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812690 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812750 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812799 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812891 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812942 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.812977 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813095 4985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813181 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813372 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813404 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.813442 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915430 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915527 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915561 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915592 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915697 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915725 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915779 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915826 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: 
\"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915859 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915904 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915936 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.915967 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.916053 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.916114 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.916146 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.920603 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.922394 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.922820 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.923555 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.923673 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.924205 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.924465 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925031 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925285 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925380 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.925484 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.927367 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.932944 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.933042 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" 
Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.938019 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"
Jan 28 18:53:23 crc kubenswrapper[4985]: I0128 18:53:23.961109 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"
Jan 28 18:53:24 crc kubenswrapper[4985]: I0128 18:53:24.533348 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl"]
Jan 28 18:53:25 crc kubenswrapper[4985]: I0128 18:53:25.547843 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerStarted","Data":"5cbc89b308fc84d66f980bf1fb8675be5069ce9c0cf07f70762c9a3fe97801e7"}
Jan 28 18:53:25 crc kubenswrapper[4985]: I0128 18:53:25.548186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerStarted","Data":"86b2b13c1f9b434c2e5a83de4df662da5429c9dffd92a5ab4c0c55d94d2c48a1"}
Jan 28 18:53:25 crc kubenswrapper[4985]: I0128 18:53:25.571711 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" podStartSLOduration=2.104136209 podStartE2EDuration="2.571681895s" podCreationTimestamp="2026-01-28 18:53:23 +0000 UTC" firstStartedPulling="2026-01-28 18:53:24.602360205 +0000 UTC m=+2415.428923026" lastFinishedPulling="2026-01-28 18:53:25.069905891 +0000 UTC m=+2415.896468712" observedRunningTime="2026-01-28 18:53:25.570180543 +0000 UTC m=+2416.396743374" watchObservedRunningTime="2026-01-28 18:53:25.571681895 +0000 UTC m=+2416.398244716"
Jan 28 18:53:32 crc kubenswrapper[4985]: I0128 18:53:32.264910 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:53:32 crc kubenswrapper[4985]: E0128 18:53:32.267060 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:53:47 crc kubenswrapper[4985]: I0128 18:53:47.264029 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
Jan 28 18:53:47 crc kubenswrapper[4985]: E0128 18:53:47.264902 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:53:58 crc kubenswrapper[4985]: I0128 18:53:58.265079 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573"
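
Threaded through the EDPM steps is the machine-config-daemon crash loop: on each relevant sync-loop pass the kubelet logs RemoveContainer for the dead container ID, then declines to restart it because the capped 5m0s CrashLoopBackOff is still in force. The cadence is visible straight from the timestamps; a quick check (plain Python, timestamps copied from the pod_workers.go:1301 entries in this section, including the 18:54 ones further down):

    from datetime import datetime

    # CrashLoopBackOff retry timestamps for machine-config-daemon-rmr8h,
    # copied from the pod_workers.go:1301 entries in this log.
    stamps = ["18:53:21", "18:53:32", "18:53:47", "18:53:58",
              "18:54:12", "18:54:23", "18:54:38"]
    times = [datetime.strptime(s, "%H:%M:%S") for s in stamps]
    print([(b - a).seconds for a, b in zip(times, times[1:])])
    # [11, 15, 11, 14, 11, 15] -- roughly one attempt per sync-loop pass,
    # each rejected while the capped 5m back-off window holds
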
Jan 28 18:53:58 crc kubenswrapper[4985]: E0128 18:53:58.266124 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 18:54:03 crc kubenswrapper[4985]: I0128 18:54:03.060637 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/aodh-db-sync-6bqfv"]
Jan 28 18:54:03 crc kubenswrapper[4985]: I0128 18:54:03.076719 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/aodh-db-sync-6bqfv"]
Jan 28 18:54:03 crc kubenswrapper[4985]: I0128 18:54:03.278032 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d276e0b0-f662-443c-a126-003ee44287c8" path="/var/lib/kubelet/pods/d276e0b0-f662-443c-a126-003ee44287c8/volumes"
Jan 28 18:54:05 crc kubenswrapper[4985]: I0128 18:54:05.017814 4985 generic.go:334] "Generic (PLEG): container finished" podID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerID="5cbc89b308fc84d66f980bf1fb8675be5069ce9c0cf07f70762c9a3fe97801e7" exitCode=0
Jan 28 18:54:05 crc kubenswrapper[4985]: I0128 18:54:05.018086 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerDied","Data":"5cbc89b308fc84d66f980bf1fb8675be5069ce9c0cf07f70762c9a3fe97801e7"}
Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.496924 4985 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592726 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592837 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592882 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592927 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.592959 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593051 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593077 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593124 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593158 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") pod 
\"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593187 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593230 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593312 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593329 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593359 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593433 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.593478 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"50ce12a8-7d79-4fa2-a879-e3082ba41427\" (UID: \"50ce12a8-7d79-4fa2-a879-e3082ba41427\") " Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.601008 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.601435 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.601485 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4" (OuterVolumeSpecName: "kube-api-access-brbd4") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "kube-api-access-brbd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.603300 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.603850 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.604068 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.604915 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.606879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.607359 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.607875 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.608187 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.608988 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.609462 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.610474 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.636671 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.641470 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory" (OuterVolumeSpecName: "inventory") pod "50ce12a8-7d79-4fa2-a879-e3082ba41427" (UID: "50ce12a8-7d79-4fa2-a879-e3082ba41427"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696687 4985 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696746 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696765 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696781 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696795 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brbd4\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-kube-api-access-brbd4\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696810 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696828 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696843 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696858 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696873 4985 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696885 4985 
reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696916 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696931 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-telemetry-power-monitoring-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696947 4985 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696961 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50ce12a8-7d79-4fa2-a879-e3082ba41427-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:06 crc kubenswrapper[4985]: I0128 18:54:06.696977 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/50ce12a8-7d79-4fa2-a879-e3082ba41427-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.040085 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" event={"ID":"50ce12a8-7d79-4fa2-a879-e3082ba41427","Type":"ContainerDied","Data":"86b2b13c1f9b434c2e5a83de4df662da5429c9dffd92a5ab4c0c55d94d2c48a1"} Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.040136 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86b2b13c1f9b434c2e5a83de4df662da5429c9dffd92a5ab4c0c55d94d2c48a1" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.040161 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.192562 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"] Jan 28 18:54:07 crc kubenswrapper[4985]: E0128 18:54:07.193289 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.193307 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.193564 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="50ce12a8-7d79-4fa2-a879-e3082ba41427" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.194437 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.200839 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201012 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201132 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201240 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.201373 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.209533 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"] Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.312870 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.312950 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.312980 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: 
\"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.313013 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.313049 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415413 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415505 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415552 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415600 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.415653 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.416691 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.420745 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.428979 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.434062 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.448569 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-h47tw\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:07 crc kubenswrapper[4985]: I0128 18:54:07.528953 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:54:08 crc kubenswrapper[4985]: I0128 18:54:08.098954 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw"] Jan 28 18:54:09 crc kubenswrapper[4985]: I0128 18:54:09.065102 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerStarted","Data":"a6bdec8510499a26c27cbda2b2c45b9cd3c5e0612fdc037ef6a4027ab34f7027"} Jan 28 18:54:09 crc kubenswrapper[4985]: I0128 18:54:09.068269 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerStarted","Data":"9be887f338d3681c1a810a44831d9c5beb00ea3f830c83597e9b4895f61de618"} Jan 28 18:54:10 crc kubenswrapper[4985]: I0128 18:54:10.104082 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" podStartSLOduration=2.4608100840000002 podStartE2EDuration="3.10405819s" podCreationTimestamp="2026-01-28 18:54:07 +0000 UTC" firstStartedPulling="2026-01-28 18:54:08.105457113 +0000 UTC m=+2458.932019934" lastFinishedPulling="2026-01-28 18:54:08.748705219 +0000 UTC m=+2459.575268040" observedRunningTime="2026-01-28 18:54:10.098190174 +0000 UTC m=+2460.924753005" watchObservedRunningTime="2026-01-28 18:54:10.10405819 +0000 UTC m=+2460.930621011" Jan 28 18:54:12 crc kubenswrapper[4985]: I0128 18:54:12.265497 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:54:12 crc kubenswrapper[4985]: E0128 18:54:12.266526 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:54:23 crc kubenswrapper[4985]: I0128 18:54:23.264984 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:54:23 crc kubenswrapper[4985]: E0128 18:54:23.265895 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:54:38 crc kubenswrapper[4985]: I0128 18:54:38.265114 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:54:38 crc kubenswrapper[4985]: E0128 18:54:38.265991 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:54:41 crc kubenswrapper[4985]: I0128 18:54:41.407560 4985 scope.go:117] "RemoveContainer" containerID="7dec6fdf3bc8770aef28236161fb96819a55a36d37cd04df32abd054cd4e7c4d" Jan 28 18:54:52 crc kubenswrapper[4985]: I0128 18:54:52.264983 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:54:52 crc kubenswrapper[4985]: E0128 18:54:52.266288 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:03 crc kubenswrapper[4985]: I0128 18:55:03.264547 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:03 crc kubenswrapper[4985]: E0128 18:55:03.265879 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:06 crc kubenswrapper[4985]: I0128 18:55:06.768174 4985 generic.go:334] "Generic (PLEG): container finished" podID="7b281922-4bb4-45f8-b633-d82925f4814e" containerID="a6bdec8510499a26c27cbda2b2c45b9cd3c5e0612fdc037ef6a4027ab34f7027" exitCode=0 Jan 28 18:55:06 crc kubenswrapper[4985]: I0128 18:55:06.768269 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerDied","Data":"a6bdec8510499a26c27cbda2b2c45b9cd3c5e0612fdc037ef6a4027ab34f7027"} Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.336798 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.432934 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433099 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433332 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.433446 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") pod \"7b281922-4bb4-45f8-b633-d82925f4814e\" (UID: \"7b281922-4bb4-45f8-b633-d82925f4814e\") " Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.481240 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2" (OuterVolumeSpecName: "kube-api-access-gw4l2") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "kube-api-access-gw4l2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.494571 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.545674 4985 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.545717 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw4l2\" (UniqueName: \"kubernetes.io/projected/7b281922-4bb4-45f8-b633-d82925f4814e-kube-api-access-gw4l2\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.589535 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.592589 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory" (OuterVolumeSpecName: "inventory") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.614481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7b281922-4bb4-45f8-b633-d82925f4814e" (UID: "7b281922-4bb4-45f8-b633-d82925f4814e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.648524 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.648567 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7b281922-4bb4-45f8-b633-d82925f4814e-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.648580 4985 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/7b281922-4bb4-45f8-b633-d82925f4814e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.789597 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" event={"ID":"7b281922-4bb4-45f8-b633-d82925f4814e","Type":"ContainerDied","Data":"9be887f338d3681c1a810a44831d9c5beb00ea3f830c83597e9b4895f61de618"} Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.789958 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9be887f338d3681c1a810a44831d9c5beb00ea3f830c83597e9b4895f61de618" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.789669 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-h47tw" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.905099 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"] Jan 28 18:55:08 crc kubenswrapper[4985]: E0128 18:55:08.905725 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b281922-4bb4-45f8-b633-d82925f4814e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.905748 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b281922-4bb4-45f8-b633-d82925f4814e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.905997 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b281922-4bb4-45f8-b633-d82925f4814e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.906996 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.910194 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911035 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911395 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911472 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911615 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.911624 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.922938 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"] Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.955966 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956058 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956130 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956206 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956283 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:08 crc kubenswrapper[4985]: I0128 18:55:08.956640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.060420 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.060655 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061410 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod 
\"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061537 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.061591 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.068046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.068879 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.069085 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.070527 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.071929 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.084660 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.259271 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:09 crc kubenswrapper[4985]: I0128 18:55:09.920033 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr"] Jan 28 18:55:10 crc kubenswrapper[4985]: I0128 18:55:10.827084 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerStarted","Data":"ac963befe690cdc1d35b858dad9c3859445a9726968785eea97d6ee2715cdae8"} Jan 28 18:55:10 crc kubenswrapper[4985]: I0128 18:55:10.827556 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerStarted","Data":"6eb7939a3a6d53cf73783e1b7daf079cac16b5c1e0797439ef1444a93fe33322"} Jan 28 18:55:10 crc kubenswrapper[4985]: I0128 18:55:10.844435 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" podStartSLOduration=2.257632971 podStartE2EDuration="2.84441119s" podCreationTimestamp="2026-01-28 18:55:08 +0000 UTC" firstStartedPulling="2026-01-28 18:55:09.926120553 +0000 UTC m=+2520.752683374" lastFinishedPulling="2026-01-28 18:55:10.512898772 +0000 UTC m=+2521.339461593" observedRunningTime="2026-01-28 18:55:10.843040342 +0000 UTC m=+2521.669603163" watchObservedRunningTime="2026-01-28 18:55:10.84441119 +0000 UTC m=+2521.670974011" Jan 28 18:55:15 crc kubenswrapper[4985]: I0128 18:55:15.265374 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:15 crc kubenswrapper[4985]: E0128 18:55:15.266510 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:29 crc kubenswrapper[4985]: I0128 18:55:29.264441 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:29 crc kubenswrapper[4985]: E0128 18:55:29.266080 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:44 crc kubenswrapper[4985]: I0128 18:55:44.264759 4985 
scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:44 crc kubenswrapper[4985]: E0128 18:55:44.265513 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:56 crc kubenswrapper[4985]: I0128 18:55:56.375792 4985 generic.go:334] "Generic (PLEG): container finished" podID="85887caf-94f1-4f74-820c-edba2628a8e6" containerID="ac963befe690cdc1d35b858dad9c3859445a9726968785eea97d6ee2715cdae8" exitCode=0 Jan 28 18:55:56 crc kubenswrapper[4985]: I0128 18:55:56.375898 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerDied","Data":"ac963befe690cdc1d35b858dad9c3859445a9726968785eea97d6ee2715cdae8"} Jan 28 18:55:57 crc kubenswrapper[4985]: I0128 18:55:57.264564 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:55:57 crc kubenswrapper[4985]: E0128 18:55:57.265148 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 18:55:57 crc kubenswrapper[4985]: I0128 18:55:57.901891 4985 util.go:48] "No ready sandbox for pod can be found. 
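[Editor's note] The "Generic (PLEG): container finished ... exitCode=0" entries here and at 18:55:06 are the normal completion signal for these run-to-completion EDPM job pods: the ansible container exits 0, then the pod is torn down. That terminated state is visible from the API in each container status; a hedged sketch (the stub pod stands in for a real client-go Get):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // reportTerminated prints each container that has run to completion —
    // the state behind the ContainerDied / exitCode=0 PLEG entries above.
    func reportTerminated(pod *corev1.Pod) {
    	for _, cs := range pod.Status.ContainerStatuses {
    		if t := cs.State.Terminated; t != nil {
    			fmt.Printf("%s: exit %d (%s)\n", cs.Name, t.ExitCode, t.Reason)
    		}
    	}
    }

    func main() {
    	// In practice the pod would come from a client-go Get; an empty stub
    	// keeps the sketch self-contained.
    	reportTerminated(&corev1.Pod{})
    }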
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064158 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064459 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064720 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.064881 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.065063 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.065213 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") pod \"85887caf-94f1-4f74-820c-edba2628a8e6\" (UID: \"85887caf-94f1-4f74-820c-edba2628a8e6\") " Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.070662 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.071199 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j" (OuterVolumeSpecName: "kube-api-access-rgs7j") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "kube-api-access-rgs7j". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.098749 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.104037 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.104831 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory" (OuterVolumeSpecName: "inventory") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.122760 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "85887caf-94f1-4f74-820c-edba2628a8e6" (UID: "85887caf-94f1-4f74-820c-edba2628a8e6"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168685 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168735 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168754 4985 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168767 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgs7j\" (UniqueName: \"kubernetes.io/projected/85887caf-94f1-4f74-820c-edba2628a8e6-kube-api-access-rgs7j\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168779 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.168789 4985 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/85887caf-94f1-4f74-820c-edba2628a8e6-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.406229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" event={"ID":"85887caf-94f1-4f74-820c-edba2628a8e6","Type":"ContainerDied","Data":"6eb7939a3a6d53cf73783e1b7daf079cac16b5c1e0797439ef1444a93fe33322"} Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.406281 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6eb7939a3a6d53cf73783e1b7daf079cac16b5c1e0797439ef1444a93fe33322" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.406343 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.644934 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9"] Jan 28 18:55:58 crc kubenswrapper[4985]: E0128 18:55:58.645775 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85887caf-94f1-4f74-820c-edba2628a8e6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.645795 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="85887caf-94f1-4f74-820c-edba2628a8e6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.646073 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="85887caf-94f1-4f74-820c-edba2628a8e6" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.646879 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.650994 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651223 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651313 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651343 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.651412 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.670487 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9"] Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703332 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703391 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: 
\"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.703660 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.704000 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.805388 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.805507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.806306 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.806377 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.806401 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.810742 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: 
\"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.811509 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.812728 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.814024 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.831660 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-swns9\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:58 crc kubenswrapper[4985]: I0128 18:55:58.972979 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:55:59 crc kubenswrapper[4985]: I0128 18:55:59.566306 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9"] Jan 28 18:56:00 crc kubenswrapper[4985]: I0128 18:56:00.430578 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerStarted","Data":"76da32fa2dc8d40e8fb07f71ee0b743aebd23afd91508409999c0fb1c42f6834"} Jan 28 18:56:02 crc kubenswrapper[4985]: I0128 18:56:02.464964 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerStarted","Data":"0cd11f134e26fd5286a737ab22f5900bfc3ffc7a04b1b4a5333939680ca416d2"} Jan 28 18:56:02 crc kubenswrapper[4985]: I0128 18:56:02.488054 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" podStartSLOduration=2.656869108 podStartE2EDuration="4.488027348s" podCreationTimestamp="2026-01-28 18:55:58 +0000 UTC" firstStartedPulling="2026-01-28 18:55:59.585865382 +0000 UTC m=+2570.412428203" lastFinishedPulling="2026-01-28 18:56:01.417023622 +0000 UTC m=+2572.243586443" observedRunningTime="2026-01-28 18:56:02.482636086 +0000 UTC m=+2573.309198907" watchObservedRunningTime="2026-01-28 18:56:02.488027348 +0000 UTC m=+2573.314590169" Jan 28 18:56:11 crc kubenswrapper[4985]: I0128 18:56:11.273932 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:56:11 crc kubenswrapper[4985]: I0128 18:56:11.565024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4"} Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.719724 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.722845 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.736406 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.777241 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.777509 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.777853 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.880776 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.880912 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.880961 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.881635 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.881661 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:37 crc kubenswrapper[4985]: I0128 18:57:37.900784 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"redhat-operators-m2zw4\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:38 crc kubenswrapper[4985]: I0128 18:57:38.050715 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:38 crc kubenswrapper[4985]: I0128 18:57:38.581098 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:39 crc kubenswrapper[4985]: I0128 18:57:39.556463 4985 generic.go:334] "Generic (PLEG): container finished" podID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" exitCode=0 Jan 28 18:57:39 crc kubenswrapper[4985]: I0128 18:57:39.557751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634"} Jan 28 18:57:39 crc kubenswrapper[4985]: I0128 18:57:39.557791 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerStarted","Data":"86e211ca3609ca2214d96788321bae078f1513b7cc9bb22c267e07e77fc71907"} Jan 28 18:57:41 crc kubenswrapper[4985]: I0128 18:57:41.598535 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerStarted","Data":"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5"} Jan 28 18:57:45 crc kubenswrapper[4985]: I0128 18:57:45.645300 4985 generic.go:334] "Generic (PLEG): container finished" podID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" exitCode=0 Jan 28 18:57:45 crc kubenswrapper[4985]: I0128 18:57:45.645346 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5"} Jan 28 18:57:47 crc kubenswrapper[4985]: I0128 18:57:47.670735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerStarted","Data":"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e"} Jan 28 18:57:47 crc kubenswrapper[4985]: I0128 18:57:47.702829 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-m2zw4" podStartSLOduration=2.887646418 podStartE2EDuration="10.702803585s" podCreationTimestamp="2026-01-28 18:57:37 +0000 UTC" firstStartedPulling="2026-01-28 18:57:39.562872011 +0000 UTC m=+2670.389434842" lastFinishedPulling="2026-01-28 18:57:47.378029188 +0000 UTC m=+2678.204592009" observedRunningTime="2026-01-28 18:57:47.692805332 +0000 UTC m=+2678.519368153" watchObservedRunningTime="2026-01-28 18:57:47.702803585 +0000 UTC m=+2678.529366416" Jan 28 18:57:48 crc kubenswrapper[4985]: I0128 18:57:48.051322 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-m2zw4" 
Jan 28 18:57:48 crc kubenswrapper[4985]: I0128 18:57:48.051643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:49 crc kubenswrapper[4985]: I0128 18:57:49.110035 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-m2zw4" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" probeResult="failure" output=< Jan 28 18:57:49 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 18:57:49 crc kubenswrapper[4985]: > Jan 28 18:57:58 crc kubenswrapper[4985]: I0128 18:57:58.099932 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:58 crc kubenswrapper[4985]: I0128 18:57:58.154111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:57:58 crc kubenswrapper[4985]: I0128 18:57:58.363419 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:57:59 crc kubenswrapper[4985]: I0128 18:57:59.822392 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-m2zw4" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" containerID="cri-o://5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" gracePeriod=2 Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.447399 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.540896 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") pod \"0ef513f4-9311-4ca7-ba53-391e37295f4d\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.541722 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") pod \"0ef513f4-9311-4ca7-ba53-391e37295f4d\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.541784 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") pod \"0ef513f4-9311-4ca7-ba53-391e37295f4d\" (UID: \"0ef513f4-9311-4ca7-ba53-391e37295f4d\") " Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.543907 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities" (OuterVolumeSpecName: "utilities") pod "0ef513f4-9311-4ca7-ba53-391e37295f4d" (UID: "0ef513f4-9311-4ca7-ba53-391e37295f4d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.572540 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64" (OuterVolumeSpecName: "kube-api-access-jkq64") pod "0ef513f4-9311-4ca7-ba53-391e37295f4d" (UID: "0ef513f4-9311-4ca7-ba53-391e37295f4d"). InnerVolumeSpecName "kube-api-access-jkq64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.645464 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkq64\" (UniqueName: \"kubernetes.io/projected/0ef513f4-9311-4ca7-ba53-391e37295f4d-kube-api-access-jkq64\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.645523 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.702188 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0ef513f4-9311-4ca7-ba53-391e37295f4d" (UID: "0ef513f4-9311-4ca7-ba53-391e37295f4d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.747239 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0ef513f4-9311-4ca7-ba53-391e37295f4d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836537 4985 generic.go:334] "Generic (PLEG): container finished" podID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" exitCode=0 Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836609 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e"} Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836645 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-m2zw4" event={"ID":"0ef513f4-9311-4ca7-ba53-391e37295f4d","Type":"ContainerDied","Data":"86e211ca3609ca2214d96788321bae078f1513b7cc9bb22c267e07e77fc71907"} Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836667 4985 scope.go:117] "RemoveContainer" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.836691 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-m2zw4" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.863305 4985 scope.go:117] "RemoveContainer" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.893031 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.907331 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-m2zw4"] Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.910214 4985 scope.go:117] "RemoveContainer" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.969161 4985 scope.go:117] "RemoveContainer" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" Jan 28 18:58:00 crc kubenswrapper[4985]: E0128 18:58:00.969574 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e\": container with ID starting with 5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e not found: ID does not exist" containerID="5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.969624 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e"} err="failed to get container status \"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e\": rpc error: code = NotFound desc = could not find container \"5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e\": container with ID starting with 5192fada8b82dafc2f3d5102626c5247dab71d72b2ead7c64260c97adb57462e not found: ID does not exist" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.969656 4985 scope.go:117] "RemoveContainer" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" Jan 28 18:58:00 crc kubenswrapper[4985]: E0128 18:58:00.970305 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5\": container with ID starting with 82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5 not found: ID does not exist" containerID="82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.970340 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5"} err="failed to get container status \"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5\": rpc error: code = NotFound desc = could not find container \"82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5\": container with ID starting with 82724bee53da912dc4148181ef5f90aa431f2b340463b83d5bf954dc93a8dcf5 not found: ID does not exist" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.970364 4985 scope.go:117] "RemoveContainer" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" Jan 28 18:58:00 crc kubenswrapper[4985]: E0128 18:58:00.970680 4985 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634\": container with ID starting with 32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634 not found: ID does not exist" containerID="32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634" Jan 28 18:58:00 crc kubenswrapper[4985]: I0128 18:58:00.970734 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634"} err="failed to get container status \"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634\": rpc error: code = NotFound desc = could not find container \"32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634\": container with ID starting with 32371000a86e4462997ced4bab89fed0990e04841171c7b3a8f7c5e2a068b634 not found: ID does not exist" Jan 28 18:58:01 crc kubenswrapper[4985]: I0128 18:58:01.276190 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" path="/var/lib/kubelet/pods/0ef513f4-9311-4ca7-ba53-391e37295f4d/volumes" Jan 28 18:58:11 crc kubenswrapper[4985]: I0128 18:58:11.185763 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:58:11 crc kubenswrapper[4985]: I0128 18:58:11.186317 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.912827 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:15 crc kubenswrapper[4985]: E0128 18:58:15.914132 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-content" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914149 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-content" Jan 28 18:58:15 crc kubenswrapper[4985]: E0128 18:58:15.914188 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914194 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" Jan 28 18:58:15 crc kubenswrapper[4985]: E0128 18:58:15.914207 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-utilities" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914213 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="extract-utilities" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.914432 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ef513f4-9311-4ca7-ba53-391e37295f4d" containerName="registry-server" Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 
Jan 28 18:58:15 crc kubenswrapper[4985]: I0128 18:58:15.929726 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"]
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.054677 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.054986 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.055503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159115 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159193 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159325 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159597 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.159734 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.190087 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"redhat-marketplace-zq8sk\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.237056 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zq8sk"
Jan 28 18:58:16 crc kubenswrapper[4985]: I0128 18:58:16.722155 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"]
Jan 28 18:58:17 crc kubenswrapper[4985]: I0128 18:58:17.026678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765"}
Jan 28 18:58:17 crc kubenswrapper[4985]: I0128 18:58:17.027016 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"d8ba6f044075ced785fa9cc45c5e2817c626522b7cd0479bc64d80543a554620"}
Jan 28 18:58:18 crc kubenswrapper[4985]: I0128 18:58:18.044591 4985 generic.go:334] "Generic (PLEG): container finished" podID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" exitCode=0
Jan 28 18:58:18 crc kubenswrapper[4985]: I0128 18:58:18.044650 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765"}
Jan 28 18:58:18 crc kubenswrapper[4985]: I0128 18:58:18.048011 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 18:58:19 crc kubenswrapper[4985]: I0128 18:58:19.058428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0"}
Jan 28 18:58:20 crc kubenswrapper[4985]: I0128 18:58:20.074646 4985 generic.go:334] "Generic (PLEG): container finished" podID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" exitCode=0
Jan 28 18:58:20 crc kubenswrapper[4985]: I0128 18:58:20.074728 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0"}
Jan 28 18:58:21 crc kubenswrapper[4985]: I0128 18:58:21.095752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerStarted","Data":"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f"}
Jan 28 18:58:21 crc kubenswrapper[4985]: I0128 18:58:21.124023 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zq8sk" podStartSLOduration=3.611980361 podStartE2EDuration="6.123980732s" podCreationTimestamp="2026-01-28 18:58:15 +0000 UTC" firstStartedPulling="2026-01-28 18:58:18.047604096 +0000 UTC m=+2708.874166917" lastFinishedPulling="2026-01-28 18:58:20.559604467 +0000 UTC m=+2711.386167288" observedRunningTime="2026-01-28 18:58:21.117359725 +0000 UTC m=+2711.943922556" watchObservedRunningTime="2026-01-28 18:58:21.123980732 +0000 UTC m=+2711.950543563"
podStartE2EDuration="6.123980732s" podCreationTimestamp="2026-01-28 18:58:15 +0000 UTC" firstStartedPulling="2026-01-28 18:58:18.047604096 +0000 UTC m=+2708.874166917" lastFinishedPulling="2026-01-28 18:58:20.559604467 +0000 UTC m=+2711.386167288" observedRunningTime="2026-01-28 18:58:21.117359725 +0000 UTC m=+2711.943922556" watchObservedRunningTime="2026-01-28 18:58:21.123980732 +0000 UTC m=+2711.950543563" Jan 28 18:58:26 crc kubenswrapper[4985]: I0128 18:58:26.237546 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:26 crc kubenswrapper[4985]: I0128 18:58:26.238088 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:26 crc kubenswrapper[4985]: I0128 18:58:26.289916 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:27 crc kubenswrapper[4985]: I0128 18:58:27.206427 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:27 crc kubenswrapper[4985]: I0128 18:58:27.259732 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.177460 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zq8sk" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" containerID="cri-o://2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" gracePeriod=2 Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.696875 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.809987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") pod \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.810408 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") pod \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.810461 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") pod \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\" (UID: \"50eaf46c-c5a3-45ec-98bb-0a22105daf95\") " Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.811096 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities" (OuterVolumeSpecName: "utilities") pod "50eaf46c-c5a3-45ec-98bb-0a22105daf95" (UID: "50eaf46c-c5a3-45ec-98bb-0a22105daf95"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.811493 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.825047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld" (OuterVolumeSpecName: "kube-api-access-lh9ld") pod "50eaf46c-c5a3-45ec-98bb-0a22105daf95" (UID: "50eaf46c-c5a3-45ec-98bb-0a22105daf95"). InnerVolumeSpecName "kube-api-access-lh9ld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:58:29 crc kubenswrapper[4985]: I0128 18:58:29.913893 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lh9ld\" (UniqueName: \"kubernetes.io/projected/50eaf46c-c5a3-45ec-98bb-0a22105daf95-kube-api-access-lh9ld\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188833 4985 generic.go:334] "Generic (PLEG): container finished" podID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" exitCode=0 Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188885 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f"} Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188926 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zq8sk" event={"ID":"50eaf46c-c5a3-45ec-98bb-0a22105daf95","Type":"ContainerDied","Data":"d8ba6f044075ced785fa9cc45c5e2817c626522b7cd0479bc64d80543a554620"} Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188965 4985 scope.go:117] "RemoveContainer" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.188964 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zq8sk" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.211661 4985 scope.go:117] "RemoveContainer" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.235665 4985 scope.go:117] "RemoveContainer" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.296671 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50eaf46c-c5a3-45ec-98bb-0a22105daf95" (UID: "50eaf46c-c5a3-45ec-98bb-0a22105daf95"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.307638 4985 scope.go:117] "RemoveContainer" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" Jan 28 18:58:30 crc kubenswrapper[4985]: E0128 18:58:30.308298 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f\": container with ID starting with 2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f not found: ID does not exist" containerID="2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.308347 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f"} err="failed to get container status \"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f\": rpc error: code = NotFound desc = could not find container \"2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f\": container with ID starting with 2f65cc741ecb1d484dc90c06b756249f8e7c9870cdd7437798113a708e8c171f not found: ID does not exist" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.308375 4985 scope.go:117] "RemoveContainer" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" Jan 28 18:58:30 crc kubenswrapper[4985]: E0128 18:58:30.308953 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0\": container with ID starting with 3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0 not found: ID does not exist" containerID="3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.309180 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0"} err="failed to get container status \"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0\": rpc error: code = NotFound desc = could not find container \"3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0\": container with ID starting with 3ae51d630c15fa1e11ff557c64443e66a01fbc51025e2abffe21a4d411e0c1a0 not found: ID does not exist" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.309200 4985 scope.go:117] "RemoveContainer" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" Jan 28 18:58:30 crc kubenswrapper[4985]: E0128 18:58:30.311357 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765\": container with ID starting with 6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765 not found: ID does not exist" containerID="6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.311404 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765"} err="failed to get container status \"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765\": rpc error: code = NotFound desc = could not 
find container \"6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765\": container with ID starting with 6d6448c28cca4d543edfd0c2eedf6990c97cee428ef58ccc7e4677db640db765 not found: ID does not exist" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.328790 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50eaf46c-c5a3-45ec-98bb-0a22105daf95-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.526183 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:30 crc kubenswrapper[4985]: I0128 18:58:30.536288 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zq8sk"] Jan 28 18:58:31 crc kubenswrapper[4985]: I0128 18:58:31.280352 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" path="/var/lib/kubelet/pods/50eaf46c-c5a3-45ec-98bb-0a22105daf95/volumes" Jan 28 18:58:41 crc kubenswrapper[4985]: I0128 18:58:41.186733 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:58:41 crc kubenswrapper[4985]: I0128 18:58:41.187427 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.185611 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.186187 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.186241 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.187211 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.187294 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" 
containerID="cri-o://5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4" gracePeriod=600 Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680018 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4" exitCode=0 Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4"} Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680691 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb"} Jan 28 18:59:11 crc kubenswrapper[4985]: I0128 18:59:11.680711 4985 scope.go:117] "RemoveContainer" containerID="89abca5dc4cd1729e4f35182d88b99645010804a9264164dd486b6469a4f9573" Jan 28 18:59:43 crc kubenswrapper[4985]: I0128 18:59:43.039223 4985 generic.go:334] "Generic (PLEG): container finished" podID="05f3f537-0392-45c7-af0d-36294670ed29" containerID="0cd11f134e26fd5286a737ab22f5900bfc3ffc7a04b1b4a5333939680ca416d2" exitCode=0 Jan 28 18:59:43 crc kubenswrapper[4985]: I0128 18:59:43.039286 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerDied","Data":"0cd11f134e26fd5286a737ab22f5900bfc3ffc7a04b1b4a5333939680ca416d2"} Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.702890 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797005 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797129 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797278 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797485 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.797547 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") pod \"05f3f537-0392-45c7-af0d-36294670ed29\" (UID: \"05f3f537-0392-45c7-af0d-36294670ed29\") " Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.803452 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.805678 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc" (OuterVolumeSpecName: "kube-api-access-tvwzc") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "kube-api-access-tvwzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.834488 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.851020 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.860567 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory" (OuterVolumeSpecName: "inventory") pod "05f3f537-0392-45c7-af0d-36294670ed29" (UID: "05f3f537-0392-45c7-af0d-36294670ed29"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.900958 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901264 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901354 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tvwzc\" (UniqueName: \"kubernetes.io/projected/05f3f537-0392-45c7-af0d-36294670ed29-kube-api-access-tvwzc\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901428 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:44 crc kubenswrapper[4985]: I0128 18:59:44.901577 4985 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/05f3f537-0392-45c7-af0d-36294670ed29-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.063673 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" event={"ID":"05f3f537-0392-45c7-af0d-36294670ed29","Type":"ContainerDied","Data":"76da32fa2dc8d40e8fb07f71ee0b743aebd23afd91508409999c0fb1c42f6834"} Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.063896 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="76da32fa2dc8d40e8fb07f71ee0b743aebd23afd91508409999c0fb1c42f6834" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.063719 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-swns9" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.158150 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4"] Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.158900 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-utilities" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.158947 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-utilities" Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.158982 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-content" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.158995 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="extract-content" Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.159034 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159046 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" Jan 28 18:59:45 crc kubenswrapper[4985]: E0128 18:59:45.159111 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05f3f537-0392-45c7-af0d-36294670ed29" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159127 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="05f3f537-0392-45c7-af0d-36294670ed29" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159621 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="50eaf46c-c5a3-45ec-98bb-0a22105daf95" containerName="registry-server" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.159695 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="05f3f537-0392-45c7-af0d-36294670ed29" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.161165 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.167900 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.167901 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.167911 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168555 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168578 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168590 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.168618 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.179016 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4"] Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310739 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310886 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310930 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.310954 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311133 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: 
\"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311187 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311380 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311573 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.311650 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414097 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414167 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414209 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414237 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414327 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414435 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414487 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414580 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.414632 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.416542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.420894 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.421087 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: 
\"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.421279 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.422438 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.423891 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.424139 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.428551 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.433889 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"nova-edpm-deployment-openstack-edpm-ipam-68wk4\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:45 crc kubenswrapper[4985]: I0128 18:59:45.486935 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 18:59:46 crc kubenswrapper[4985]: I0128 18:59:46.101780 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4"] Jan 28 18:59:46 crc kubenswrapper[4985]: W0128 18:59:46.105811 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb129af39_361b_4dba_bdbb_31531c3a2ce9.slice/crio-3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c WatchSource:0}: Error finding container 3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c: Status 404 returned error can't find the container with id 3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c Jan 28 18:59:47 crc kubenswrapper[4985]: I0128 18:59:47.084134 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerStarted","Data":"3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c"} Jan 28 18:59:48 crc kubenswrapper[4985]: I0128 18:59:48.101881 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerStarted","Data":"0b6a7ce57d1549ccd7fcb1e692f7f4ffc2788f4699e60c9a7fdd7e7e4ae4777e"} Jan 28 18:59:48 crc kubenswrapper[4985]: I0128 18:59:48.129648 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" podStartSLOduration=2.611628895 podStartE2EDuration="3.129624165s" podCreationTimestamp="2026-01-28 18:59:45 +0000 UTC" firstStartedPulling="2026-01-28 18:59:46.108495427 +0000 UTC m=+2796.935058248" lastFinishedPulling="2026-01-28 18:59:46.626490687 +0000 UTC m=+2797.453053518" observedRunningTime="2026-01-28 18:59:48.122772921 +0000 UTC m=+2798.949335762" watchObservedRunningTime="2026-01-28 18:59:48.129624165 +0000 UTC m=+2798.956186996" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.147578 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.149825 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.152052 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.153536 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.162063 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.177987 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.178216 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.178322 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.280787 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.280949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.281191 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.281663 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod 
\"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.303401 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.312518 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"collect-profiles-29493780-v4zzw\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:00 crc kubenswrapper[4985]: I0128 19:00:00.484099 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.050787 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"] Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.240140 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerStarted","Data":"fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7"} Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.240508 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerStarted","Data":"8a2fca129e9b5d437fa5b8e4e2a0cbbbc5bd4bd1ae2fbcd231460f8b55032a52"} Jan 28 19:00:01 crc kubenswrapper[4985]: I0128 19:00:01.268930 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" podStartSLOduration=1.268910118 podStartE2EDuration="1.268910118s" podCreationTimestamp="2026-01-28 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:00:01.258036691 +0000 UTC m=+2812.084599522" watchObservedRunningTime="2026-01-28 19:00:01.268910118 +0000 UTC m=+2812.095472939" Jan 28 19:00:02 crc kubenswrapper[4985]: I0128 19:00:02.255621 4985 generic.go:334] "Generic (PLEG): container finished" podID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerID="fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7" exitCode=0 Jan 28 19:00:02 crc kubenswrapper[4985]: I0128 19:00:02.255693 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerDied","Data":"fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7"} Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.726577 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.764629 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") pod \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.764679 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") pod \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.764996 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") pod \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\" (UID: \"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1\") " Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.769644 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume" (OuterVolumeSpecName: "config-volume") pod "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" (UID: "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.772729 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" (UID: "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.787046 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p" (OuterVolumeSpecName: "kube-api-access-tql8p") pod "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" (UID: "322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1"). InnerVolumeSpecName "kube-api-access-tql8p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.868516 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.868556 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:03 crc kubenswrapper[4985]: I0128 19:00:03.868583 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tql8p\" (UniqueName: \"kubernetes.io/projected/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1-kube-api-access-tql8p\") on node \"crc\" DevicePath \"\"" Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.275571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" event={"ID":"322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1","Type":"ContainerDied","Data":"8a2fca129e9b5d437fa5b8e4e2a0cbbbc5bd4bd1ae2fbcd231460f8b55032a52"} Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.275872 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8a2fca129e9b5d437fa5b8e4e2a0cbbbc5bd4bd1ae2fbcd231460f8b55032a52" Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.275614 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw" Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.339812 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 19:00:04 crc kubenswrapper[4985]: I0128 19:00:04.351544 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493735-f4d57"] Jan 28 19:00:05 crc kubenswrapper[4985]: I0128 19:00:05.287115 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1030ed14-9fc1-4ec9-a93c-13eab69320ae" path="/var/lib/kubelet/pods/1030ed14-9fc1-4ec9-a93c-13eab69320ae/volumes" Jan 28 19:00:41 crc kubenswrapper[4985]: I0128 19:00:41.695359 4985 scope.go:117] "RemoveContainer" containerID="437ea022ca695dd3c8be1cbb1b44f690df361a980e7c2eb2985b0f8b38dc9e0c" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.164580 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29493781-6kphz"] Jan 28 19:01:00 crc kubenswrapper[4985]: E0128 19:01:00.165812 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerName="collect-profiles" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.165834 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerName="collect-profiles" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.166192 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" containerName="collect-profiles" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.167346 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.180385 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493781-6kphz"] Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.288475 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.288988 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.289042 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.289121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392180 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392228 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.392421 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.400182 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.404220 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.405088 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.412736 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"keystone-cron-29493781-6kphz\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:00 crc kubenswrapper[4985]: I0128 19:01:00.494323 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.012550 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493781-6kphz"] Jan 28 19:01:01 crc kubenswrapper[4985]: W0128 19:01:01.022593 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7635ee1a_7676_44ad_af7f_ebfab7b56933.slice/crio-afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db WatchSource:0}: Error finding container afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db: Status 404 returned error can't find the container with id afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.912355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerStarted","Data":"f86670ac3325122c583d2e8a88920c9a20e9a32076d431e392e60b06070ddc47"} Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.912694 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerStarted","Data":"afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db"} Jan 28 19:01:01 crc kubenswrapper[4985]: I0128 19:01:01.960754 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29493781-6kphz" podStartSLOduration=1.96073026 podStartE2EDuration="1.96073026s" podCreationTimestamp="2026-01-28 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:01:01.935177088 +0000 UTC m=+2872.761739909" watchObservedRunningTime="2026-01-28 19:01:01.96073026 +0000 UTC m=+2872.787293091" Jan 28 19:01:04 crc kubenswrapper[4985]: I0128 19:01:04.953200 4985 
generic.go:334] "Generic (PLEG): container finished" podID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerID="f86670ac3325122c583d2e8a88920c9a20e9a32076d431e392e60b06070ddc47" exitCode=0 Jan 28 19:01:04 crc kubenswrapper[4985]: I0128 19:01:04.953276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerDied","Data":"f86670ac3325122c583d2e8a88920c9a20e9a32076d431e392e60b06070ddc47"} Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.383673 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495630 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495688 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495824 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.495859 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") pod \"7635ee1a-7676-44ad-af7f-ebfab7b56933\" (UID: \"7635ee1a-7676-44ad-af7f-ebfab7b56933\") " Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.501628 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz" (OuterVolumeSpecName: "kube-api-access-rcmrz") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "kube-api-access-rcmrz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.502147 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.542382 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.559510 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data" (OuterVolumeSpecName: "config-data") pod "7635ee1a-7676-44ad-af7f-ebfab7b56933" (UID: "7635ee1a-7676-44ad-af7f-ebfab7b56933"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599325 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599359 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599369 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rcmrz\" (UniqueName: \"kubernetes.io/projected/7635ee1a-7676-44ad-af7f-ebfab7b56933-kube-api-access-rcmrz\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.599380 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7635ee1a-7676-44ad-af7f-ebfab7b56933-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.974163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493781-6kphz" event={"ID":"7635ee1a-7676-44ad-af7f-ebfab7b56933","Type":"ContainerDied","Data":"afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db"} Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.974504 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afb7610c275439d6dad3a63793eb281b4d96af700b47d290bb5ab634a053a1db" Jan 28 19:01:06 crc kubenswrapper[4985]: I0128 19:01:06.974232 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493781-6kphz" Jan 28 19:01:11 crc kubenswrapper[4985]: I0128 19:01:11.186789 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:01:11 crc kubenswrapper[4985]: I0128 19:01:11.187536 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.965907 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:15 crc kubenswrapper[4985]: E0128 19:01:15.967559 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerName="keystone-cron" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.967579 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerName="keystone-cron" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.967966 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7635ee1a-7676-44ad-af7f-ebfab7b56933" containerName="keystone-cron" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.972155 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:15 crc kubenswrapper[4985]: I0128 19:01:15.982705 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.064160 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.064523 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.064654 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.167791 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " 
pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.167898 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.167934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.168445 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.168510 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.188009 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"certified-operators-trpsd\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.303379 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:16 crc kubenswrapper[4985]: I0128 19:01:16.878012 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:17 crc kubenswrapper[4985]: I0128 19:01:17.080672 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerStarted","Data":"faaa09621c13f7869b093d973f48110c892e2f5b743c15f112d4392d8754104e"} Jan 28 19:01:18 crc kubenswrapper[4985]: I0128 19:01:18.093777 4985 generic.go:334] "Generic (PLEG): container finished" podID="d8975c23-346a-478b-b671-42564f301319" containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" exitCode=0 Jan 28 19:01:18 crc kubenswrapper[4985]: I0128 19:01:18.095226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e"} Jan 28 19:01:21 crc kubenswrapper[4985]: I0128 19:01:21.151983 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerStarted","Data":"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640"} Jan 28 19:01:24 crc kubenswrapper[4985]: I0128 19:01:24.192090 4985 generic.go:334] "Generic (PLEG): container finished" podID="d8975c23-346a-478b-b671-42564f301319" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" exitCode=0 Jan 28 19:01:24 crc kubenswrapper[4985]: I0128 19:01:24.192178 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640"} Jan 28 19:01:25 crc kubenswrapper[4985]: I0128 19:01:25.205264 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerStarted","Data":"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67"} Jan 28 19:01:25 crc kubenswrapper[4985]: I0128 19:01:25.227266 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-trpsd" podStartSLOduration=3.533151823 podStartE2EDuration="10.227226104s" podCreationTimestamp="2026-01-28 19:01:15 +0000 UTC" firstStartedPulling="2026-01-28 19:01:18.097856722 +0000 UTC m=+2888.924419543" lastFinishedPulling="2026-01-28 19:01:24.791931003 +0000 UTC m=+2895.618493824" observedRunningTime="2026-01-28 19:01:25.223230511 +0000 UTC m=+2896.049793332" watchObservedRunningTime="2026-01-28 19:01:25.227226104 +0000 UTC m=+2896.053788925" Jan 28 19:01:26 crc kubenswrapper[4985]: I0128 19:01:26.303523 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:26 crc kubenswrapper[4985]: I0128 19:01:26.303839 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:27 crc kubenswrapper[4985]: I0128 19:01:27.366536 4985 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/certified-operators-trpsd" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" probeResult="failure" output=< Jan 28 19:01:27 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:01:27 crc kubenswrapper[4985]: > Jan 28 19:01:36 crc kubenswrapper[4985]: I0128 19:01:36.362013 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:36 crc kubenswrapper[4985]: I0128 19:01:36.414862 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:36 crc kubenswrapper[4985]: I0128 19:01:36.604762 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.335983 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-trpsd" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" containerID="cri-o://c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" gracePeriod=2 Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.857722 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.942722 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") pod \"d8975c23-346a-478b-b671-42564f301319\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.942790 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") pod \"d8975c23-346a-478b-b671-42564f301319\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.943011 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") pod \"d8975c23-346a-478b-b671-42564f301319\" (UID: \"d8975c23-346a-478b-b671-42564f301319\") " Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.943809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities" (OuterVolumeSpecName: "utilities") pod "d8975c23-346a-478b-b671-42564f301319" (UID: "d8975c23-346a-478b-b671-42564f301319"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.948337 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h" (OuterVolumeSpecName: "kube-api-access-gr46h") pod "d8975c23-346a-478b-b671-42564f301319" (UID: "d8975c23-346a-478b-b671-42564f301319"). InnerVolumeSpecName "kube-api-access-gr46h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:01:38 crc kubenswrapper[4985]: I0128 19:01:38.999166 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d8975c23-346a-478b-b671-42564f301319" (UID: "d8975c23-346a-478b-b671-42564f301319"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.045496 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr46h\" (UniqueName: \"kubernetes.io/projected/d8975c23-346a-478b-b671-42564f301319-kube-api-access-gr46h\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.045526 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.045535 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d8975c23-346a-478b-b671-42564f301319-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350381 4985 generic.go:334] "Generic (PLEG): container finished" podID="d8975c23-346a-478b-b671-42564f301319" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" exitCode=0 Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350431 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67"} Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-trpsd" event={"ID":"d8975c23-346a-478b-b671-42564f301319","Type":"ContainerDied","Data":"faaa09621c13f7869b093d973f48110c892e2f5b743c15f112d4392d8754104e"} Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350471 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-trpsd" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.350490 4985 scope.go:117] "RemoveContainer" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.382574 4985 scope.go:117] "RemoveContainer" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.393133 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.406447 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-trpsd"] Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.421821 4985 scope.go:117] "RemoveContainer" containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.490870 4985 scope.go:117] "RemoveContainer" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" Jan 28 19:01:39 crc kubenswrapper[4985]: E0128 19:01:39.491763 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67\": container with ID starting with c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67 not found: ID does not exist" containerID="c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.491791 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67"} err="failed to get container status \"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67\": rpc error: code = NotFound desc = could not find container \"c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67\": container with ID starting with c5a798e76c8796578e76d873f543848e1058a06552b59d4541aa4758cf744d67 not found: ID does not exist" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.491818 4985 scope.go:117] "RemoveContainer" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" Jan 28 19:01:39 crc kubenswrapper[4985]: E0128 19:01:39.492303 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640\": container with ID starting with 84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640 not found: ID does not exist" containerID="84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.492372 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640"} err="failed to get container status \"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640\": rpc error: code = NotFound desc = could not find container \"84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640\": container with ID starting with 84a51fc31812baede64e155c046cfa865cc864eed03e73248352aebe5eddb640 not found: ID does not exist" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.492424 4985 scope.go:117] "RemoveContainer" 
containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" Jan 28 19:01:39 crc kubenswrapper[4985]: E0128 19:01:39.492876 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e\": container with ID starting with 3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e not found: ID does not exist" containerID="3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e" Jan 28 19:01:39 crc kubenswrapper[4985]: I0128 19:01:39.492939 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e"} err="failed to get container status \"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e\": rpc error: code = NotFound desc = could not find container \"3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e\": container with ID starting with 3bdf3ee74d2ac17e6855c926ab8d5cc5a44c8f639bcb0b83adcfca19a97f3e6e not found: ID does not exist" Jan 28 19:01:41 crc kubenswrapper[4985]: I0128 19:01:41.187097 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:01:41 crc kubenswrapper[4985]: I0128 19:01:41.187387 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:01:41 crc kubenswrapper[4985]: I0128 19:01:41.275880 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8975c23-346a-478b-b671-42564f301319" path="/var/lib/kubelet/pods/d8975c23-346a-478b-b671-42564f301319/volumes" Jan 28 19:01:59 crc kubenswrapper[4985]: I0128 19:01:59.574628 4985 generic.go:334] "Generic (PLEG): container finished" podID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerID="0b6a7ce57d1549ccd7fcb1e692f7f4ffc2788f4699e60c9a7fdd7e7e4ae4777e" exitCode=0 Jan 28 19:01:59 crc kubenswrapper[4985]: I0128 19:01:59.574971 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerDied","Data":"0b6a7ce57d1549ccd7fcb1e692f7f4ffc2788f4699e60c9a7fdd7e7e4ae4777e"} Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.099974 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109006 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109054 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109103 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109125 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109173 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109199 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109239 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109310 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.109360 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") pod \"b129af39-361b-4dba-bdbb-31531c3a2ce9\" (UID: \"b129af39-361b-4dba-bdbb-31531c3a2ce9\") " Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.121156 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.139071 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc" (OuterVolumeSpecName: "kube-api-access-mt5sc") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "kube-api-access-mt5sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.181726 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.184511 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.186893 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.189193 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory" (OuterVolumeSpecName: "inventory") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.201432 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.206822 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.216641 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "b129af39-361b-4dba-bdbb-31531c3a2ce9" (UID: "b129af39-361b-4dba-bdbb-31531c3a2ce9"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218129 4985 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218306 4985 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218384 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mt5sc\" (UniqueName: \"kubernetes.io/projected/b129af39-361b-4dba-bdbb-31531c3a2ce9-kube-api-access-mt5sc\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218466 4985 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218541 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218655 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218730 4985 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218799 4985 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.218871 4985 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b129af39-361b-4dba-bdbb-31531c3a2ce9-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.596041 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" event={"ID":"b129af39-361b-4dba-bdbb-31531c3a2ce9","Type":"ContainerDied","Data":"3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c"} Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.596389 4985 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3b594d4eee4b54c3372cab8ba60d4c1ef200410a74a95e06ae052c59e590055c" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.596138 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-68wk4" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708229 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq"] Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708811 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708839 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708866 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-content" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708874 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-content" Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708889 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-utilities" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708896 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="extract-utilities" Jan 28 19:02:01 crc kubenswrapper[4985]: E0128 19:02:01.708932 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.708941 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.709235 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b129af39-361b-4dba-bdbb-31531c3a2ce9" containerName="nova-edpm-deployment-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.709305 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8975c23-346a-478b-b671-42564f301319" containerName="registry-server" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.710333 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.713657 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.713930 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.714198 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.714528 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.714701 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.764499 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq"] Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766181 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766244 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766405 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766502 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766554 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " 
pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766833 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.766888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869223 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869323 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869396 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869452 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869492 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869566 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod 
\"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.869604 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.873628 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.873845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.874164 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.875028 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.875126 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.875503 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:01 crc kubenswrapper[4985]: I0128 19:02:01.899982 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gz68d\" (UniqueName: 
\"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-lhknq\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:02 crc kubenswrapper[4985]: I0128 19:02:02.072238 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:02:02 crc kubenswrapper[4985]: I0128 19:02:02.652648 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq"] Jan 28 19:02:03 crc kubenswrapper[4985]: I0128 19:02:03.626631 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerStarted","Data":"8b17231f5ddf8a4fcdf6edbbf7bfe5301dfb0efab463adfdf2cac11011e5b761"} Jan 28 19:02:03 crc kubenswrapper[4985]: I0128 19:02:03.627900 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerStarted","Data":"5c9ce223c2123209d1a3f6e2f8a810235bf69b9d7616933eef101635da4de2e3"} Jan 28 19:02:03 crc kubenswrapper[4985]: I0128 19:02:03.653392 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" podStartSLOduration=2.24544909 podStartE2EDuration="2.653367426s" podCreationTimestamp="2026-01-28 19:02:01 +0000 UTC" firstStartedPulling="2026-01-28 19:02:02.661552888 +0000 UTC m=+2933.488115709" lastFinishedPulling="2026-01-28 19:02:03.069471224 +0000 UTC m=+2933.896034045" observedRunningTime="2026-01-28 19:02:03.652548443 +0000 UTC m=+2934.479111264" watchObservedRunningTime="2026-01-28 19:02:03.653367426 +0000 UTC m=+2934.479930257" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.185828 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.186411 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.186461 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.187337 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.187395 4985 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" gracePeriod=600 Jan 28 19:02:11 crc kubenswrapper[4985]: E0128 19:02:11.318375 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.721948 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" exitCode=0 Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.722011 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb"} Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.722397 4985 scope.go:117] "RemoveContainer" containerID="5a8c9d2caebf9577d32e5d0f94fe2ab9bc2dff20b5b793ce82c0ec429e6181e4" Jan 28 19:02:11 crc kubenswrapper[4985]: I0128 19:02:11.723682 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:11 crc kubenswrapper[4985]: E0128 19:02:11.726978 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.191297 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.193688 4985 util.go:30] "No sandbox for pod can be found. 
Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.213687 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.241584 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.241645 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.241778 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.344056 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.344648 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.344801 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.345080 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.345245 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.386456 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"community-operators-fqckw\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:12 crc kubenswrapper[4985]: I0128 19:02:12.512478 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.125996 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.775884 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" exitCode=0 Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.775966 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df"} Jan 28 19:02:13 crc kubenswrapper[4985]: I0128 19:02:13.776175 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerStarted","Data":"d929c4c8bbc677706ab10198545032bdd49d95e33281d5782ef5fb53e383b1ef"} Jan 28 19:02:16 crc kubenswrapper[4985]: I0128 19:02:16.814802 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerStarted","Data":"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0"} Jan 28 19:02:21 crc kubenswrapper[4985]: I0128 19:02:21.880038 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" exitCode=0 Jan 28 19:02:21 crc kubenswrapper[4985]: I0128 19:02:21.880121 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0"} Jan 28 19:02:27 crc kubenswrapper[4985]: I0128 19:02:27.266025 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:27 crc kubenswrapper[4985]: E0128 19:02:27.266887 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:28 crc kubenswrapper[4985]: I0128 19:02:28.035742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerStarted","Data":"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a"} Jan 28 19:02:28 crc kubenswrapper[4985]: I0128 19:02:28.065762 4985 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fqckw" podStartSLOduration=2.225900421 podStartE2EDuration="16.065739976s" podCreationTimestamp="2026-01-28 19:02:12 +0000 UTC" firstStartedPulling="2026-01-28 19:02:13.779909128 +0000 UTC m=+2944.606471949" lastFinishedPulling="2026-01-28 19:02:27.619748683 +0000 UTC m=+2958.446311504" observedRunningTime="2026-01-28 19:02:28.054531989 +0000 UTC m=+2958.881094820" watchObservedRunningTime="2026-01-28 19:02:28.065739976 +0000 UTC m=+2958.892302797" Jan 28 19:02:32 crc kubenswrapper[4985]: I0128 19:02:32.512616 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:32 crc kubenswrapper[4985]: I0128 19:02:32.513129 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:32 crc kubenswrapper[4985]: I0128 19:02:32.570043 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:33 crc kubenswrapper[4985]: I0128 19:02:33.125582 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:33 crc kubenswrapper[4985]: I0128 19:02:33.175724 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.097448 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fqckw" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" containerID="cri-o://357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" gracePeriod=2 Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.608340 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.672457 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") pod \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.672564 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") pod \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.672905 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") pod \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\" (UID: \"a0c408a3-7c9d-4083-8497-0d63e85a2e75\") " Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.673929 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities" (OuterVolumeSpecName: "utilities") pod "a0c408a3-7c9d-4083-8497-0d63e85a2e75" (UID: "a0c408a3-7c9d-4083-8497-0d63e85a2e75"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.680146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7" (OuterVolumeSpecName: "kube-api-access-cppn7") pod "a0c408a3-7c9d-4083-8497-0d63e85a2e75" (UID: "a0c408a3-7c9d-4083-8497-0d63e85a2e75"). InnerVolumeSpecName "kube-api-access-cppn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.731980 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a0c408a3-7c9d-4083-8497-0d63e85a2e75" (UID: "a0c408a3-7c9d-4083-8497-0d63e85a2e75"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.776407 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.776674 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cppn7\" (UniqueName: \"kubernetes.io/projected/a0c408a3-7c9d-4083-8497-0d63e85a2e75-kube-api-access-cppn7\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:35 crc kubenswrapper[4985]: I0128 19:02:35.776762 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a0c408a3-7c9d-4083-8497-0d63e85a2e75-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.108985 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" exitCode=0 Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109040 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a"} Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109073 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fqckw" event={"ID":"a0c408a3-7c9d-4083-8497-0d63e85a2e75","Type":"ContainerDied","Data":"d929c4c8bbc677706ab10198545032bdd49d95e33281d5782ef5fb53e383b1ef"} Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109091 4985 scope.go:117] "RemoveContainer" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.109230 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fqckw" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.137716 4985 scope.go:117] "RemoveContainer" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.162808 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.172181 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fqckw"] Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.189820 4985 scope.go:117] "RemoveContainer" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.234592 4985 scope.go:117] "RemoveContainer" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" Jan 28 19:02:36 crc kubenswrapper[4985]: E0128 19:02:36.235024 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a\": container with ID starting with 357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a not found: ID does not exist" containerID="357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235066 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a"} err="failed to get container status \"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a\": rpc error: code = NotFound desc = could not find container \"357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a\": container with ID starting with 357ae65edf1d74f4e393c4c471a25d089d5133f9891ca7afe3a5ef5ad1f7424a not found: ID does not exist" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235096 4985 scope.go:117] "RemoveContainer" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" Jan 28 19:02:36 crc kubenswrapper[4985]: E0128 19:02:36.235385 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0\": container with ID starting with 154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0 not found: ID does not exist" containerID="154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235406 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0"} err="failed to get container status \"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0\": rpc error: code = NotFound desc = could not find container \"154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0\": container with ID starting with 154c24c17344bf5b412ca685947fda989703029ad6c54b792326ccd17c09dcd0 not found: ID does not exist" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.235423 4985 scope.go:117] "RemoveContainer" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" Jan 28 19:02:36 crc kubenswrapper[4985]: E0128 19:02:36.236033 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df\": container with ID starting with 141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df not found: ID does not exist" containerID="141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df" Jan 28 19:02:36 crc kubenswrapper[4985]: I0128 19:02:36.236053 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df"} err="failed to get container status \"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df\": rpc error: code = NotFound desc = could not find container \"141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df\": container with ID starting with 141010ecc8b006f33d7f415a573e8c8e4c33db5a9007be31c6d63bb1948563df not found: ID does not exist" Jan 28 19:02:37 crc kubenswrapper[4985]: I0128 19:02:37.277420 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" path="/var/lib/kubelet/pods/a0c408a3-7c9d-4083-8497-0d63e85a2e75/volumes" Jan 28 19:02:38 crc kubenswrapper[4985]: I0128 19:02:38.264419 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:38 crc kubenswrapper[4985]: E0128 19:02:38.265042 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:02:53 crc kubenswrapper[4985]: I0128 19:02:53.264214 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:02:53 crc kubenswrapper[4985]: E0128 19:02:53.265370 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:05 crc kubenswrapper[4985]: I0128 19:03:05.264384 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:05 crc kubenswrapper[4985]: E0128 19:03:05.265177 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:20 crc kubenswrapper[4985]: I0128 19:03:20.264382 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:20 crc kubenswrapper[4985]: E0128 19:03:20.265316 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:31 crc kubenswrapper[4985]: I0128 19:03:31.276512 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:31 crc kubenswrapper[4985]: E0128 19:03:31.277169 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:44 crc kubenswrapper[4985]: I0128 19:03:44.263840 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:44 crc kubenswrapper[4985]: E0128 19:03:44.264639 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:03:56 crc kubenswrapper[4985]: I0128 19:03:56.264119 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:03:56 crc kubenswrapper[4985]: E0128 19:03:56.264890 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:04:07 crc kubenswrapper[4985]: I0128 19:04:07.264681 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:07 crc kubenswrapper[4985]: E0128 19:04:07.265566 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:04:21 crc kubenswrapper[4985]: I0128 19:04:21.271216 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:21 crc kubenswrapper[4985]: E0128 19:04:21.272074 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 28 19:04:27 crc kubenswrapper[4985]: I0128 19:04:27.318522 4985 generic.go:334] "Generic (PLEG): container finished" podID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerID="8b17231f5ddf8a4fcdf6edbbf7bfe5301dfb0efab463adfdf2cac11011e5b761" exitCode=0 Jan 28 19:04:27 crc kubenswrapper[4985]: I0128 19:04:27.318616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerDied","Data":"8b17231f5ddf8a4fcdf6edbbf7bfe5301dfb0efab463adfdf2cac11011e5b761"} Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.839466 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957291 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957400 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957467 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957511 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957570 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957687 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.957715 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: 
\"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") pod \"557f8a1e-1a37-47a3-aa41-7222181ea137\" (UID: \"557f8a1e-1a37-47a3-aa41-7222181ea137\") " Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.963684 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.968443 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d" (OuterVolumeSpecName: "kube-api-access-gz68d") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "kube-api-access-gz68d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.995529 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory" (OuterVolumeSpecName: "inventory") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:28 crc kubenswrapper[4985]: I0128 19:04:28.998397 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.002727 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.002765 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.003820 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "557f8a1e-1a37-47a3-aa41-7222181ea137" (UID: "557f8a1e-1a37-47a3-aa41-7222181ea137"). InnerVolumeSpecName "ceilometer-compute-config-data-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060626 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060662 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060675 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gz68d\" (UniqueName: \"kubernetes.io/projected/557f8a1e-1a37-47a3-aa41-7222181ea137-kube-api-access-gz68d\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060685 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060696 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060704 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.060713 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/557f8a1e-1a37-47a3-aa41-7222181ea137-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.340191 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" event={"ID":"557f8a1e-1a37-47a3-aa41-7222181ea137","Type":"ContainerDied","Data":"5c9ce223c2123209d1a3f6e2f8a810235bf69b9d7616933eef101635da4de2e3"} Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.340227 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c9ce223c2123209d1a3f6e2f8a810235bf69b9d7616933eef101635da4de2e3" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.340286 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-lhknq" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454412 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls"] Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454899 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454916 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454951 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-utilities" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454957 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-utilities" Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454965 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454971 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" Jan 28 19:04:29 crc kubenswrapper[4985]: E0128 19:04:29.454983 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-content" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.454988 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="extract-content" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.455224 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="557f8a1e-1a37-47a3-aa41-7222181ea137" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.455262 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a0c408a3-7c9d-4083-8497-0d63e85a2e75" containerName="registry-server" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.456080 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461657 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461804 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-ipmi-config-data" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461884 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.461899 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.462126 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.471961 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls"] Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572768 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572817 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572858 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572888 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod 
\"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.572990 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.573033 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.674829 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675083 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675129 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675168 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675205 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " 
pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675239 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.675465 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.686899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.687135 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.688047 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.689341 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.692402 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.703174 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.703667 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:29 crc kubenswrapper[4985]: I0128 19:04:29.780595 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:04:30 crc kubenswrapper[4985]: I0128 19:04:30.352832 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:04:30 crc kubenswrapper[4985]: I0128 19:04:30.354602 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls"] Jan 28 19:04:31 crc kubenswrapper[4985]: I0128 19:04:31.367991 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerStarted","Data":"8ea8fcb948c015ea73698aa70b25889e81199d3f1076b232700b8bb7c130da10"} Jan 28 19:04:31 crc kubenswrapper[4985]: I0128 19:04:31.368359 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerStarted","Data":"b199812b5cc9cf5d92c4b1353a88e7f0beb570cf77c1f9a72103035686c3c51a"} Jan 28 19:04:31 crc kubenswrapper[4985]: I0128 19:04:31.392673 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" podStartSLOduration=1.965105011 podStartE2EDuration="2.392655982s" podCreationTimestamp="2026-01-28 19:04:29 +0000 UTC" firstStartedPulling="2026-01-28 19:04:30.352513706 +0000 UTC m=+3081.179076527" lastFinishedPulling="2026-01-28 19:04:30.780064677 +0000 UTC m=+3081.606627498" observedRunningTime="2026-01-28 19:04:31.387808824 +0000 UTC m=+3082.214371655" watchObservedRunningTime="2026-01-28 19:04:31.392655982 +0000 UTC m=+3082.219218803" Jan 28 19:04:33 crc kubenswrapper[4985]: I0128 19:04:33.264443 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:33 crc kubenswrapper[4985]: E0128 19:04:33.264977 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:04:48 crc kubenswrapper[4985]: I0128 19:04:48.264564 4985 scope.go:117] "RemoveContainer" 
containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:04:48 crc kubenswrapper[4985]: E0128 19:04:48.266850 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:01 crc kubenswrapper[4985]: I0128 19:05:01.273239 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:01 crc kubenswrapper[4985]: E0128 19:05:01.274126 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:15 crc kubenswrapper[4985]: I0128 19:05:15.264493 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:15 crc kubenswrapper[4985]: E0128 19:05:15.265525 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:27 crc kubenswrapper[4985]: I0128 19:05:27.264947 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:27 crc kubenswrapper[4985]: E0128 19:05:27.265819 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:39 crc kubenswrapper[4985]: I0128 19:05:39.265134 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:39 crc kubenswrapper[4985]: E0128 19:05:39.266225 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:05:54 crc kubenswrapper[4985]: I0128 19:05:54.265597 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:05:54 crc kubenswrapper[4985]: E0128 19:05:54.266813 4985 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:05 crc kubenswrapper[4985]: I0128 19:06:05.265219 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:05 crc kubenswrapper[4985]: E0128 19:06:05.265989 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:16 crc kubenswrapper[4985]: I0128 19:06:16.264615 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:16 crc kubenswrapper[4985]: E0128 19:06:16.265522 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:23 crc kubenswrapper[4985]: I0128 19:06:23.632166 4985 generic.go:334] "Generic (PLEG): container finished" podID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerID="8ea8fcb948c015ea73698aa70b25889e81199d3f1076b232700b8bb7c130da10" exitCode=0 Jan 28 19:06:23 crc kubenswrapper[4985]: I0128 19:06:23.632859 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerDied","Data":"8ea8fcb948c015ea73698aa70b25889e81199d3f1076b232700b8bb7c130da10"} Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.158609 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264298 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264655 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264703 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264783 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.264953 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.265056 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.265085 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") pod \"d9d4a4e3-9f29-45a2-9748-d133f122af06\" (UID: \"d9d4a4e3-9f29-45a2-9748-d133f122af06\") " Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.270265 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-power-monitoring-combined-ca-bundle") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "telemetry-power-monitoring-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.272890 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf" (OuterVolumeSpecName: "kube-api-access-wxxnf") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "kube-api-access-wxxnf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.298790 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-1") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ceilometer-ipmi-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.306287 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-0") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ceilometer-ipmi-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.306345 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2" (OuterVolumeSpecName: "ceilometer-ipmi-config-data-2") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ceilometer-ipmi-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.306422 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.312400 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory" (OuterVolumeSpecName: "inventory") pod "d9d4a4e3-9f29-45a2-9748-d133f122af06" (UID: "d9d4a4e3-9f29-45a2-9748-d133f122af06"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378769 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-2\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-2\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378795 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-1\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378807 4985 reconciler_common.go:293] "Volume detached for volume \"ceilometer-ipmi-config-data-0\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ceilometer-ipmi-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378816 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxxnf\" (UniqueName: \"kubernetes.io/projected/d9d4a4e3-9f29-45a2-9748-d133f122af06-kube-api-access-wxxnf\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378824 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378834 4985 reconciler_common.go:293] "Volume detached for volume \"telemetry-power-monitoring-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-telemetry-power-monitoring-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.378842 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d9d4a4e3-9f29-45a2-9748-d133f122af06-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.657749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" event={"ID":"d9d4a4e3-9f29-45a2-9748-d133f122af06","Type":"ContainerDied","Data":"b199812b5cc9cf5d92c4b1353a88e7f0beb570cf77c1f9a72103035686c3c51a"} Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.657794 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b199812b5cc9cf5d92c4b1353a88e7f0beb570cf77c1f9a72103035686c3c51a" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.657810 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.759011 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7"] Jan 28 19:06:25 crc kubenswrapper[4985]: E0128 19:06:25.759563 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.759579 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.759838 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9d4a4e3-9f29-45a2-9748-d133f122af06" containerName="telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.760628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768086 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"logging-compute-config-data" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768130 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768845 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.768926 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.769061 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-jvtzh" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.786687 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.786944 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.786998 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 
19:06:25.787140 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.787224 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.788974 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7"] Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888426 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888702 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888801 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.888931 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.889049 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.894864 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.894899 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.895093 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.895453 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:25 crc kubenswrapper[4985]: I0128 19:06:25.905656 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"logging-edpm-deployment-openstack-edpm-ipam-wn6r7\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:26 crc kubenswrapper[4985]: I0128 19:06:26.086083 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:26 crc kubenswrapper[4985]: I0128 19:06:26.669570 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7"] Jan 28 19:06:27 crc kubenswrapper[4985]: I0128 19:06:27.723572 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerStarted","Data":"eebc1fab3fbe6e3bc4d99333108d03286bc86771600f6891902f829e592cdfc4"} Jan 28 19:06:27 crc kubenswrapper[4985]: I0128 19:06:27.725012 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerStarted","Data":"23310972a28ed4e2f0fa6d03c0061ee3ae2e74f087c158d1a566307e4d2f53b6"} Jan 28 19:06:27 crc kubenswrapper[4985]: I0128 19:06:27.753462 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" podStartSLOduration=2.192102215 podStartE2EDuration="2.753442121s" podCreationTimestamp="2026-01-28 19:06:25 +0000 UTC" firstStartedPulling="2026-01-28 19:06:26.668346863 +0000 UTC m=+3197.494909684" lastFinishedPulling="2026-01-28 19:06:27.229686769 +0000 UTC m=+3198.056249590" observedRunningTime="2026-01-28 19:06:27.747789481 +0000 UTC m=+3198.574352322" watchObservedRunningTime="2026-01-28 19:06:27.753442121 +0000 UTC m=+3198.580004942" Jan 28 19:06:29 crc kubenswrapper[4985]: I0128 19:06:29.265290 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:29 crc kubenswrapper[4985]: E0128 19:06:29.265637 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:41 crc kubenswrapper[4985]: I0128 19:06:41.869539 4985 generic.go:334] "Generic (PLEG): container finished" podID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerID="eebc1fab3fbe6e3bc4d99333108d03286bc86771600f6891902f829e592cdfc4" exitCode=0 Jan 28 19:06:41 crc kubenswrapper[4985]: I0128 19:06:41.869742 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerDied","Data":"eebc1fab3fbe6e3bc4d99333108d03286bc86771600f6891902f829e592cdfc4"} Jan 28 19:06:42 crc kubenswrapper[4985]: I0128 19:06:42.264432 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:42 crc kubenswrapper[4985]: E0128 19:06:42.264825 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:06:43 crc 
kubenswrapper[4985]: I0128 19:06:43.367080 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.410872 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.410936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.410960 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.411119 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.411320 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") pod \"c6c90c6c-aa78-4215-9c43-acd22891abfb\" (UID: \"c6c90c6c-aa78-4215-9c43-acd22891abfb\") " Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.419047 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr" (OuterVolumeSpecName: "kube-api-access-9tmdr") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "kube-api-access-9tmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.455197 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0" (OuterVolumeSpecName: "logging-compute-config-data-0") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "logging-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.459538 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.460324 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory" (OuterVolumeSpecName: "inventory") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.463433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1" (OuterVolumeSpecName: "logging-compute-config-data-1") pod "c6c90c6c-aa78-4215-9c43-acd22891abfb" (UID: "c6c90c6c-aa78-4215-9c43-acd22891abfb"). InnerVolumeSpecName "logging-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514230 4985 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-inventory\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514283 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9tmdr\" (UniqueName: \"kubernetes.io/projected/c6c90c6c-aa78-4215-9c43-acd22891abfb-kube-api-access-9tmdr\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514297 4985 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514307 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.514317 4985 reconciler_common.go:293] "Volume detached for volume \"logging-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/c6c90c6c-aa78-4215-9c43-acd22891abfb-logging-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.895312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" event={"ID":"c6c90c6c-aa78-4215-9c43-acd22891abfb","Type":"ContainerDied","Data":"23310972a28ed4e2f0fa6d03c0061ee3ae2e74f087c158d1a566307e4d2f53b6"} Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.895351 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="23310972a28ed4e2f0fa6d03c0061ee3ae2e74f087c158d1a566307e4d2f53b6" Jan 28 19:06:43 crc kubenswrapper[4985]: I0128 19:06:43.895386 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/logging-edpm-deployment-openstack-edpm-ipam-wn6r7" Jan 28 19:06:55 crc kubenswrapper[4985]: I0128 19:06:55.264614 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:06:55 crc kubenswrapper[4985]: E0128 19:06:55.265695 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:07:08 crc kubenswrapper[4985]: I0128 19:07:08.265160 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:07:08 crc kubenswrapper[4985]: E0128 19:07:08.266192 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:07:22 crc kubenswrapper[4985]: I0128 19:07:22.265907 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:07:23 crc kubenswrapper[4985]: I0128 19:07:23.353099 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6"} Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.276288 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:20 crc kubenswrapper[4985]: E0128 19:08:20.277425 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.277443 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.277697 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c90c6c-aa78-4215-9c43-acd22891abfb" containerName="logging-edpm-deployment-openstack-edpm-ipam" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.279907 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.286868 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.410334 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.410535 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.410689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.513557 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.513640 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.513782 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.514214 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.514269 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.534227 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"redhat-operators-vhxbr\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:20 crc kubenswrapper[4985]: I0128 19:08:20.611911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:21 crc kubenswrapper[4985]: I0128 19:08:21.112778 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:22 crc kubenswrapper[4985]: I0128 19:08:22.042115 4985 generic.go:334] "Generic (PLEG): container finished" podID="103d61a7-b2c1-4122-845a-e63c994c8946" containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" exitCode=0 Jan 28 19:08:22 crc kubenswrapper[4985]: I0128 19:08:22.042232 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070"} Jan 28 19:08:22 crc kubenswrapper[4985]: I0128 19:08:22.042489 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerStarted","Data":"8ca1e66c758ccac6692df31c7cc94b8051c203fd6964bbf5f1d0f882e2c52e2e"} Jan 28 19:08:24 crc kubenswrapper[4985]: I0128 19:08:24.067891 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerStarted","Data":"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488"} Jan 28 19:08:28 crc kubenswrapper[4985]: I0128 19:08:28.114946 4985 generic.go:334] "Generic (PLEG): container finished" podID="103d61a7-b2c1-4122-845a-e63c994c8946" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" exitCode=0 Jan 28 19:08:28 crc kubenswrapper[4985]: I0128 19:08:28.115024 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488"} Jan 28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.139335 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerStarted","Data":"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8"} Jan 28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.160299 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vhxbr" podStartSLOduration=3.107878626 podStartE2EDuration="10.16027622s" podCreationTimestamp="2026-01-28 19:08:20 +0000 UTC" firstStartedPulling="2026-01-28 19:08:22.045467532 +0000 UTC m=+3312.872030353" lastFinishedPulling="2026-01-28 19:08:29.097865126 +0000 UTC m=+3319.924427947" observedRunningTime="2026-01-28 19:08:30.156986287 +0000 UTC m=+3320.983549128" watchObservedRunningTime="2026-01-28 19:08:30.16027622 +0000 UTC m=+3320.986839041" Jan 28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.612463 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 
28 19:08:30 crc kubenswrapper[4985]: I0128 19:08:30.612873 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:31 crc kubenswrapper[4985]: I0128 19:08:31.668194 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vhxbr" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" probeResult="failure" output=< Jan 28 19:08:31 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:08:31 crc kubenswrapper[4985]: > Jan 28 19:08:40 crc kubenswrapper[4985]: I0128 19:08:40.665575 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:40 crc kubenswrapper[4985]: I0128 19:08:40.731092 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:40 crc kubenswrapper[4985]: I0128 19:08:40.913778 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.271780 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vhxbr" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" containerID="cri-o://a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" gracePeriod=2 Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.804145 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.981500 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") pod \"103d61a7-b2c1-4122-845a-e63c994c8946\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.981936 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") pod \"103d61a7-b2c1-4122-845a-e63c994c8946\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.982139 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") pod \"103d61a7-b2c1-4122-845a-e63c994c8946\" (UID: \"103d61a7-b2c1-4122-845a-e63c994c8946\") " Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.984042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities" (OuterVolumeSpecName: "utilities") pod "103d61a7-b2c1-4122-845a-e63c994c8946" (UID: "103d61a7-b2c1-4122-845a-e63c994c8946"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:08:42 crc kubenswrapper[4985]: I0128 19:08:42.990661 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh" (OuterVolumeSpecName: "kube-api-access-68ckh") pod "103d61a7-b2c1-4122-845a-e63c994c8946" (UID: "103d61a7-b2c1-4122-845a-e63c994c8946"). InnerVolumeSpecName "kube-api-access-68ckh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.086398 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-68ckh\" (UniqueName: \"kubernetes.io/projected/103d61a7-b2c1-4122-845a-e63c994c8946-kube-api-access-68ckh\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.086446 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.136264 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "103d61a7-b2c1-4122-845a-e63c994c8946" (UID: "103d61a7-b2c1-4122-845a-e63c994c8946"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.190111 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/103d61a7-b2c1-4122-845a-e63c994c8946-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297688 4985 generic.go:334] "Generic (PLEG): container finished" podID="103d61a7-b2c1-4122-845a-e63c994c8946" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" exitCode=0 Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297732 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8"} Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297759 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vhxbr" event={"ID":"103d61a7-b2c1-4122-845a-e63c994c8946","Type":"ContainerDied","Data":"8ca1e66c758ccac6692df31c7cc94b8051c203fd6964bbf5f1d0f882e2c52e2e"} Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297760 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vhxbr" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.297777 4985 scope.go:117] "RemoveContainer" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.339778 4985 scope.go:117] "RemoveContainer" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.341530 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.367414 4985 scope.go:117] "RemoveContainer" containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.388889 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vhxbr"] Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.415362 4985 scope.go:117] "RemoveContainer" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.416686 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8\": container with ID starting with a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8 not found: ID does not exist" containerID="a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.416829 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8"} err="failed to get container status \"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8\": rpc error: code = NotFound desc = could not find container \"a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8\": container with ID starting with a16fba66e4d27f16c05faedd7b621c4cd960d676eadab971959dfb61a6ad05c8 not found: ID does not exist" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.416947 4985 scope.go:117] "RemoveContainer" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.420872 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488\": container with ID starting with eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488 not found: ID does not exist" containerID="eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.426549 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488"} err="failed to get container status \"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488\": rpc error: code = NotFound desc = could not find container \"eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488\": container with ID starting with eba59d91d2b845742e89ed73c709b8ec58165c549f978680c781c98cfb7fc488 not found: ID does not exist" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.426615 4985 scope.go:117] "RemoveContainer" 
containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.427242 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070\": container with ID starting with 152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070 not found: ID does not exist" containerID="152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070" Jan 28 19:08:43 crc kubenswrapper[4985]: I0128 19:08:43.427681 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070"} err="failed to get container status \"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070\": rpc error: code = NotFound desc = could not find container \"152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070\": container with ID starting with 152fb7765d69f5d86c88754aa771f9ca7800fc1f84dd7ab261d39b2d08e88070 not found: ID does not exist" Jan 28 19:08:43 crc kubenswrapper[4985]: E0128 19:08:43.580622 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod103d61a7_b2c1_4122_845a_e63c994c8946.slice/crio-8ca1e66c758ccac6692df31c7cc94b8051c203fd6964bbf5f1d0f882e2c52e2e\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod103d61a7_b2c1_4122_845a_e63c994c8946.slice\": RecentStats: unable to find data in memory cache]" Jan 28 19:08:45 crc kubenswrapper[4985]: I0128 19:08:45.280239 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" path="/var/lib/kubelet/pods/103d61a7-b2c1-4122-845a-e63c994c8946/volumes" Jan 28 19:09:41 crc kubenswrapper[4985]: I0128 19:09:41.186388 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:09:41 crc kubenswrapper[4985]: I0128 19:09:41.186890 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:10:11 crc kubenswrapper[4985]: I0128 19:10:11.185740 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:10:11 crc kubenswrapper[4985]: I0128 19:10:11.187929 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.186211 4985 patch_prober.go:28] interesting 
pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.186886 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.186943 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.188010 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.188086 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6" gracePeriod=600 Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675426 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6" exitCode=0 Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675483 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6"} Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675705 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf"} Jan 28 19:10:41 crc kubenswrapper[4985]: I0128 19:10:41.675739 4985 scope.go:117] "RemoveContainer" containerID="b50b8019ee13628eda557fba70aceebaa9c5e208a5912f5329da373ecd4888bb" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.804582 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:02 crc kubenswrapper[4985]: E0128 19:11:02.805878 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="extract-utilities" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.805898 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="extract-utilities" Jan 28 19:11:02 crc kubenswrapper[4985]: E0128 19:11:02.805915 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" 
containerName="extract-content" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.805924 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="extract-content" Jan 28 19:11:02 crc kubenswrapper[4985]: E0128 19:11:02.805939 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.805947 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.806244 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="103d61a7-b2c1-4122-845a-e63c994c8946" containerName="registry-server" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.808262 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.817192 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.951057 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.951139 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:02 crc kubenswrapper[4985]: I0128 19:11:02.951225 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054194 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054324 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054449 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"redhat-marketplace-kqksb\" (UID: 
\"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054747 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.054815 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.076238 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"redhat-marketplace-kqksb\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.139119 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.735909 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.954647 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7"} Jan 28 19:11:03 crc kubenswrapper[4985]: I0128 19:11:03.954703 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"5e7a0066a54de8d6e4d60ae7e1974a56dabafdfacb5cba38824d8a6aa776b194"} Jan 28 19:11:04 crc kubenswrapper[4985]: I0128 19:11:04.966686 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" exitCode=0 Jan 28 19:11:04 crc kubenswrapper[4985]: I0128 19:11:04.967233 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7"} Jan 28 19:11:04 crc kubenswrapper[4985]: I0128 19:11:04.970170 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:11:07 crc kubenswrapper[4985]: I0128 19:11:07.001126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30"} Jan 28 19:11:08 crc kubenswrapper[4985]: I0128 19:11:08.016608 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd68953-5617-46ec-8c09-7189d7dfab9a" 
containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" exitCode=0 Jan 28 19:11:08 crc kubenswrapper[4985]: I0128 19:11:08.016690 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30"} Jan 28 19:11:09 crc kubenswrapper[4985]: I0128 19:11:09.032125 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerStarted","Data":"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d"} Jan 28 19:11:09 crc kubenswrapper[4985]: I0128 19:11:09.056407 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-kqksb" podStartSLOduration=3.610566189 podStartE2EDuration="7.056389532s" podCreationTimestamp="2026-01-28 19:11:02 +0000 UTC" firstStartedPulling="2026-01-28 19:11:04.969868944 +0000 UTC m=+3475.796431765" lastFinishedPulling="2026-01-28 19:11:08.415692277 +0000 UTC m=+3479.242255108" observedRunningTime="2026-01-28 19:11:09.051978647 +0000 UTC m=+3479.878541468" watchObservedRunningTime="2026-01-28 19:11:09.056389532 +0000 UTC m=+3479.882952353" Jan 28 19:11:13 crc kubenswrapper[4985]: I0128 19:11:13.140325 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:13 crc kubenswrapper[4985]: I0128 19:11:13.140979 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:13 crc kubenswrapper[4985]: I0128 19:11:13.190550 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:14 crc kubenswrapper[4985]: I0128 19:11:14.140160 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:14 crc kubenswrapper[4985]: I0128 19:11:14.220084 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.117040 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-kqksb" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" containerID="cri-o://0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" gracePeriod=2 Jan 28 19:11:16 crc kubenswrapper[4985]: E0128 19:11:16.291330 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podedd68953_5617_46ec_8c09_7189d7dfab9a.slice/crio-0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.695506 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.815554 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") pod \"edd68953-5617-46ec-8c09-7189d7dfab9a\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.815622 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") pod \"edd68953-5617-46ec-8c09-7189d7dfab9a\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.815693 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") pod \"edd68953-5617-46ec-8c09-7189d7dfab9a\" (UID: \"edd68953-5617-46ec-8c09-7189d7dfab9a\") " Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.817781 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities" (OuterVolumeSpecName: "utilities") pod "edd68953-5617-46ec-8c09-7189d7dfab9a" (UID: "edd68953-5617-46ec-8c09-7189d7dfab9a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.831756 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs" (OuterVolumeSpecName: "kube-api-access-fpvfs") pod "edd68953-5617-46ec-8c09-7189d7dfab9a" (UID: "edd68953-5617-46ec-8c09-7189d7dfab9a"). InnerVolumeSpecName "kube-api-access-fpvfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.839502 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "edd68953-5617-46ec-8c09-7189d7dfab9a" (UID: "edd68953-5617-46ec-8c09-7189d7dfab9a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.919044 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpvfs\" (UniqueName: \"kubernetes.io/projected/edd68953-5617-46ec-8c09-7189d7dfab9a-kube-api-access-fpvfs\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.919131 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:16 crc kubenswrapper[4985]: I0128 19:11:16.919142 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/edd68953-5617-46ec-8c09-7189d7dfab9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129474 4985 generic.go:334] "Generic (PLEG): container finished" podID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" exitCode=0 Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129530 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d"} Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-kqksb" event={"ID":"edd68953-5617-46ec-8c09-7189d7dfab9a","Type":"ContainerDied","Data":"5e7a0066a54de8d6e4d60ae7e1974a56dabafdfacb5cba38824d8a6aa776b194"} Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.129656 4985 scope.go:117] "RemoveContainer" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.130909 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-kqksb" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.174734 4985 scope.go:117] "RemoveContainer" containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.209931 4985 scope.go:117] "RemoveContainer" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.215425 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.234025 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-kqksb"] Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265087 4985 scope.go:117] "RemoveContainer" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" Jan 28 19:11:17 crc kubenswrapper[4985]: E0128 19:11:17.265516 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d\": container with ID starting with 0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d not found: ID does not exist" containerID="0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265547 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d"} err="failed to get container status \"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d\": rpc error: code = NotFound desc = could not find container \"0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d\": container with ID starting with 0cf92421b4bb7bf9a3683faf758b88221b95e6971414e900a2b2300c5eac107d not found: ID does not exist" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265563 4985 scope.go:117] "RemoveContainer" containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" Jan 28 19:11:17 crc kubenswrapper[4985]: E0128 19:11:17.265908 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30\": container with ID starting with a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30 not found: ID does not exist" containerID="a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265944 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30"} err="failed to get container status \"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30\": rpc error: code = NotFound desc = could not find container \"a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30\": container with ID starting with a2fa191fe8d6b9e7ea68f2af5db73d7b0bcdeab9cdce35173621dd3b5924af30 not found: ID does not exist" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.265966 4985 scope.go:117] "RemoveContainer" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" Jan 28 19:11:17 crc kubenswrapper[4985]: E0128 19:11:17.266287 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7\": container with ID starting with 3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7 not found: ID does not exist" containerID="3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.266316 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7"} err="failed to get container status \"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7\": rpc error: code = NotFound desc = could not find container \"3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7\": container with ID starting with 3dcf9006d41a0906b640d5e7fefb8f80c69d71de72bca6dbad07a077dbc09ee7 not found: ID does not exist" Jan 28 19:11:17 crc kubenswrapper[4985]: I0128 19:11:17.275611 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" path="/var/lib/kubelet/pods/edd68953-5617-46ec-8c09-7189d7dfab9a/volumes" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.524468 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:22 crc kubenswrapper[4985]: E0128 19:11:22.527443 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-content" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.527554 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-content" Jan 28 19:11:22 crc kubenswrapper[4985]: E0128 19:11:22.527652 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-utilities" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.527753 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="extract-utilities" Jan 28 19:11:22 crc kubenswrapper[4985]: E0128 19:11:22.527876 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.527963 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.528399 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="edd68953-5617-46ec-8c09-7189d7dfab9a" containerName="registry-server" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.531114 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.548098 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.572092 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.572338 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.572620 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675015 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675162 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675264 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675664 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.675897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.695621 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"certified-operators-nmd4h\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:22 crc kubenswrapper[4985]: I0128 19:11:22.857733 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:23 crc kubenswrapper[4985]: I0128 19:11:23.416194 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:24 crc kubenswrapper[4985]: I0128 19:11:24.235382 4985 generic.go:334] "Generic (PLEG): container finished" podID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" exitCode=0 Jan 28 19:11:24 crc kubenswrapper[4985]: I0128 19:11:24.235465 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab"} Jan 28 19:11:24 crc kubenswrapper[4985]: I0128 19:11:24.235696 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerStarted","Data":"cd5483b11db8f03e88cd6505a04e2d29146345183abc44446dd962fee7ea0233"} Jan 28 19:11:26 crc kubenswrapper[4985]: I0128 19:11:26.262957 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerStarted","Data":"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7"} Jan 28 19:11:29 crc kubenswrapper[4985]: I0128 19:11:29.299187 4985 generic.go:334] "Generic (PLEG): container finished" podID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" exitCode=0 Jan 28 19:11:29 crc kubenswrapper[4985]: I0128 19:11:29.299730 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7"} Jan 28 19:11:30 crc kubenswrapper[4985]: I0128 19:11:30.334030 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerStarted","Data":"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a"} Jan 28 19:11:30 crc kubenswrapper[4985]: I0128 19:11:30.359160 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-nmd4h" podStartSLOduration=2.579191108 podStartE2EDuration="8.359138847s" podCreationTimestamp="2026-01-28 19:11:22 +0000 UTC" firstStartedPulling="2026-01-28 19:11:24.238033082 +0000 UTC m=+3495.064595903" lastFinishedPulling="2026-01-28 19:11:30.017980821 +0000 UTC m=+3500.844543642" observedRunningTime="2026-01-28 19:11:30.356865053 +0000 UTC m=+3501.183427874" watchObservedRunningTime="2026-01-28 19:11:30.359138847 +0000 UTC m=+3501.185701668" Jan 28 19:11:32 crc kubenswrapper[4985]: I0128 19:11:32.859652 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:32 crc kubenswrapper[4985]: I0128 19:11:32.860025 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:32 crc kubenswrapper[4985]: I0128 19:11:32.917643 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:42 crc kubenswrapper[4985]: I0128 19:11:42.930720 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:43 crc kubenswrapper[4985]: I0128 19:11:43.016033 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:43 crc kubenswrapper[4985]: I0128 19:11:43.506355 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-nmd4h" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" containerID="cri-o://86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" gracePeriod=2 Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.005753 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.137670 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") pod \"effbec3a-d9f3-442b-8323-f1efe45da6e7\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.137730 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") pod \"effbec3a-d9f3-442b-8323-f1efe45da6e7\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.137801 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") pod \"effbec3a-d9f3-442b-8323-f1efe45da6e7\" (UID: \"effbec3a-d9f3-442b-8323-f1efe45da6e7\") " Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.138793 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities" (OuterVolumeSpecName: "utilities") pod "effbec3a-d9f3-442b-8323-f1efe45da6e7" (UID: "effbec3a-d9f3-442b-8323-f1efe45da6e7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.144609 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f" (OuterVolumeSpecName: "kube-api-access-tct2f") pod "effbec3a-d9f3-442b-8323-f1efe45da6e7" (UID: "effbec3a-d9f3-442b-8323-f1efe45da6e7"). InnerVolumeSpecName "kube-api-access-tct2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.190040 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "effbec3a-d9f3-442b-8323-f1efe45da6e7" (UID: "effbec3a-d9f3-442b-8323-f1efe45da6e7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.240364 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.240397 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/effbec3a-d9f3-442b-8323-f1efe45da6e7-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.240407 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tct2f\" (UniqueName: \"kubernetes.io/projected/effbec3a-d9f3-442b-8323-f1efe45da6e7-kube-api-access-tct2f\") on node \"crc\" DevicePath \"\"" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.519722 4985 generic.go:334] "Generic (PLEG): container finished" podID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" exitCode=0 Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.519785 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-nmd4h" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.519804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a"} Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.520325 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-nmd4h" event={"ID":"effbec3a-d9f3-442b-8323-f1efe45da6e7","Type":"ContainerDied","Data":"cd5483b11db8f03e88cd6505a04e2d29146345183abc44446dd962fee7ea0233"} Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.520352 4985 scope.go:117] "RemoveContainer" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.545726 4985 scope.go:117] "RemoveContainer" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.567103 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.578295 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-nmd4h"] Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.585707 4985 scope.go:117] "RemoveContainer" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.634930 4985 scope.go:117] "RemoveContainer" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" Jan 28 19:11:44 crc kubenswrapper[4985]: E0128 19:11:44.635342 4985 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a\": container with ID starting with 86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a not found: ID does not exist" containerID="86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635389 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a"} err="failed to get container status \"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a\": rpc error: code = NotFound desc = could not find container \"86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a\": container with ID starting with 86d565c7cbec1e0f70ffc9f7e94ad2a3506cc0c5ab8738a7c74ef549a14be38a not found: ID does not exist" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635418 4985 scope.go:117] "RemoveContainer" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" Jan 28 19:11:44 crc kubenswrapper[4985]: E0128 19:11:44.635892 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7\": container with ID starting with e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7 not found: ID does not exist" containerID="e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635924 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7"} err="failed to get container status \"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7\": rpc error: code = NotFound desc = could not find container \"e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7\": container with ID starting with e0c8737364299e40cb70149b46b262695df4d1cd1da57765277b48557a15f2c7 not found: ID does not exist" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.635941 4985 scope.go:117] "RemoveContainer" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" Jan 28 19:11:44 crc kubenswrapper[4985]: E0128 19:11:44.636122 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab\": container with ID starting with 2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab not found: ID does not exist" containerID="2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab" Jan 28 19:11:44 crc kubenswrapper[4985]: I0128 19:11:44.636148 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab"} err="failed to get container status \"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab\": rpc error: code = NotFound desc = could not find container \"2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab\": container with ID starting with 2d817a0294a493cc5c48902bf7e692931dd8389258fd0efc10a24096288311ab not found: ID does not exist" Jan 28 19:11:45 crc kubenswrapper[4985]: I0128 19:11:45.299018 4985 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" path="/var/lib/kubelet/pods/effbec3a-d9f3-442b-8323-f1efe45da6e7/volumes" Jan 28 19:12:41 crc kubenswrapper[4985]: I0128 19:12:41.185740 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:12:41 crc kubenswrapper[4985]: I0128 19:12:41.186199 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.186463 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.186987 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.957546 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:11 crc kubenswrapper[4985]: E0128 19:13:11.958479 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-utilities" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958498 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-utilities" Jan 28 19:13:11 crc kubenswrapper[4985]: E0128 19:13:11.958523 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958531 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" Jan 28 19:13:11 crc kubenswrapper[4985]: E0128 19:13:11.958556 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-content" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958568 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="extract-content" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.958843 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="effbec3a-d9f3-442b-8323-f1efe45da6e7" containerName="registry-server" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.961318 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.976061 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.981134 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.981201 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:11 crc kubenswrapper[4985]: I0128 19:13:11.981295 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084220 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084414 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084667 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084792 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.084897 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.107281 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"community-operators-tkzhb\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.307872 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:12 crc kubenswrapper[4985]: I0128 19:13:12.861924 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:13 crc kubenswrapper[4985]: I0128 19:13:13.631003 4985 generic.go:334] "Generic (PLEG): container finished" podID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" exitCode=0 Jan 28 19:13:13 crc kubenswrapper[4985]: I0128 19:13:13.631297 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca"} Jan 28 19:13:13 crc kubenswrapper[4985]: I0128 19:13:13.631778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerStarted","Data":"8886093d8e543f3fc13c31718f237c34c3af925dbaec60d5dddf203751ff3f82"} Jan 28 19:13:16 crc kubenswrapper[4985]: I0128 19:13:16.673338 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerStarted","Data":"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116"} Jan 28 19:13:16 crc kubenswrapper[4985]: E0128 19:13:16.916304 4985 upgradeaware.go:441] Error proxying data from backend to client: writeto tcp 38.102.83.195:46748->38.102.83.195:43365: read tcp 38.102.83.195:46748->38.102.83.195:43365: read: connection reset by peer Jan 28 19:13:19 crc kubenswrapper[4985]: E0128 19:13:19.085960 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950fa11d_42de_4bd7_87b2_f660e063c57f.slice/crio-conmon-21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950fa11d_42de_4bd7_87b2_f660e063c57f.slice/crio-21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:13:19 crc kubenswrapper[4985]: I0128 19:13:19.712835 4985 generic.go:334] "Generic (PLEG): container finished" podID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" exitCode=0 Jan 28 19:13:19 crc kubenswrapper[4985]: I0128 19:13:19.713186 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116"} Jan 28 19:13:20 crc kubenswrapper[4985]: I0128 19:13:20.729325 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerStarted","Data":"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb"} Jan 28 19:13:20 crc kubenswrapper[4985]: I0128 19:13:20.758168 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tkzhb" podStartSLOduration=3.21449852 podStartE2EDuration="9.758143356s" podCreationTimestamp="2026-01-28 19:13:11 +0000 UTC" firstStartedPulling="2026-01-28 19:13:13.633696511 +0000 UTC m=+3604.460259332" lastFinishedPulling="2026-01-28 19:13:20.177341337 +0000 UTC m=+3611.003904168" observedRunningTime="2026-01-28 19:13:20.745931951 +0000 UTC m=+3611.572494782" watchObservedRunningTime="2026-01-28 19:13:20.758143356 +0000 UTC m=+3611.584706177" Jan 28 19:13:22 crc kubenswrapper[4985]: I0128 19:13:22.308485 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:22 crc kubenswrapper[4985]: I0128 19:13:22.309368 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:23 crc kubenswrapper[4985]: I0128 19:13:23.360058 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-tkzhb" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" probeResult="failure" output=< Jan 28 19:13:23 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:13:23 crc kubenswrapper[4985]: > Jan 28 19:13:32 crc kubenswrapper[4985]: I0128 19:13:32.360410 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:32 crc kubenswrapper[4985]: I0128 19:13:32.417591 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:32 crc kubenswrapper[4985]: I0128 19:13:32.605570 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:33 crc kubenswrapper[4985]: I0128 19:13:33.898529 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tkzhb" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" containerID="cri-o://5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" gracePeriod=2 Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.431827 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.442264 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") pod \"950fa11d-42de-4bd7-87b2-f660e063c57f\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.442459 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") pod \"950fa11d-42de-4bd7-87b2-f660e063c57f\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.442503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") pod \"950fa11d-42de-4bd7-87b2-f660e063c57f\" (UID: \"950fa11d-42de-4bd7-87b2-f660e063c57f\") " Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.443158 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities" (OuterVolumeSpecName: "utilities") pod "950fa11d-42de-4bd7-87b2-f660e063c57f" (UID: "950fa11d-42de-4bd7-87b2-f660e063c57f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.458216 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj" (OuterVolumeSpecName: "kube-api-access-w5tsj") pod "950fa11d-42de-4bd7-87b2-f660e063c57f" (UID: "950fa11d-42de-4bd7-87b2-f660e063c57f"). InnerVolumeSpecName "kube-api-access-w5tsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.517934 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "950fa11d-42de-4bd7-87b2-f660e063c57f" (UID: "950fa11d-42de-4bd7-87b2-f660e063c57f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.545287 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.545330 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/950fa11d-42de-4bd7-87b2-f660e063c57f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.545346 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w5tsj\" (UniqueName: \"kubernetes.io/projected/950fa11d-42de-4bd7-87b2-f660e063c57f-kube-api-access-w5tsj\") on node \"crc\" DevicePath \"\"" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.911768 4985 generic.go:334] "Generic (PLEG): container finished" podID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" exitCode=0 Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.911847 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tkzhb" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.911868 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb"} Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.912200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tkzhb" event={"ID":"950fa11d-42de-4bd7-87b2-f660e063c57f","Type":"ContainerDied","Data":"8886093d8e543f3fc13c31718f237c34c3af925dbaec60d5dddf203751ff3f82"} Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.912218 4985 scope.go:117] "RemoveContainer" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.932679 4985 scope.go:117] "RemoveContainer" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.967261 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.972075 4985 scope.go:117] "RemoveContainer" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" Jan 28 19:13:34 crc kubenswrapper[4985]: I0128 19:13:34.979943 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tkzhb"] Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.022698 4985 scope.go:117] "RemoveContainer" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" Jan 28 19:13:35 crc kubenswrapper[4985]: E0128 19:13:35.023329 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb\": container with ID starting with 5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb not found: ID does not exist" containerID="5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.023456 
4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb"} err="failed to get container status \"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb\": rpc error: code = NotFound desc = could not find container \"5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb\": container with ID starting with 5c80587d12b8c7f32c071450ed532d041bb2eb9d87697f13d594057fef385ceb not found: ID does not exist" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.023558 4985 scope.go:117] "RemoveContainer" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" Jan 28 19:13:35 crc kubenswrapper[4985]: E0128 19:13:35.024044 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116\": container with ID starting with 21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116 not found: ID does not exist" containerID="21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.024117 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116"} err="failed to get container status \"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116\": rpc error: code = NotFound desc = could not find container \"21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116\": container with ID starting with 21f9640a1ab2bd2c268db83a1c2054ea3133c4af5e579540b8f1b85dcc637116 not found: ID does not exist" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.024147 4985 scope.go:117] "RemoveContainer" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" Jan 28 19:13:35 crc kubenswrapper[4985]: E0128 19:13:35.024507 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca\": container with ID starting with e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca not found: ID does not exist" containerID="e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.024628 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca"} err="failed to get container status \"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca\": rpc error: code = NotFound desc = could not find container \"e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca\": container with ID starting with e583e8d3c979992a3d89b11923015cb0d98257411b23a86b7bf7cbf1fd037fca not found: ID does not exist" Jan 28 19:13:35 crc kubenswrapper[4985]: I0128 19:13:35.278682 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" path="/var/lib/kubelet/pods/950fa11d-42de-4bd7-87b2-f660e063c57f/volumes" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.185784 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.186296 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.186351 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.187167 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.187223 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" gracePeriod=600 Jan 28 19:13:41 crc kubenswrapper[4985]: E0128 19:13:41.321999 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.988392 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" exitCode=0 Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.988437 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf"} Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.988469 4985 scope.go:117] "RemoveContainer" containerID="a627b2b579e569c0b043d2fecf15b4dfaeb3f01422dbeb527c4e889676ab53e6" Jan 28 19:13:41 crc kubenswrapper[4985]: I0128 19:13:41.989261 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:13:41 crc kubenswrapper[4985]: E0128 19:13:41.989624 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:13:57 crc kubenswrapper[4985]: I0128 19:13:57.264588 4985 scope.go:117] "RemoveContainer" 
containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:13:57 crc kubenswrapper[4985]: E0128 19:13:57.265388 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:09 crc kubenswrapper[4985]: I0128 19:14:09.267126 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:09 crc kubenswrapper[4985]: E0128 19:14:09.267879 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:21 crc kubenswrapper[4985]: I0128 19:14:21.273153 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:21 crc kubenswrapper[4985]: E0128 19:14:21.274391 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:32 crc kubenswrapper[4985]: I0128 19:14:32.264566 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:32 crc kubenswrapper[4985]: E0128 19:14:32.265468 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:14:46 crc kubenswrapper[4985]: I0128 19:14:46.264397 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:14:46 crc kubenswrapper[4985]: E0128 19:14:46.265114 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.177959 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.181627 4985 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-utilities" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.181880 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-utilities" Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.181983 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.182067 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.182206 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-content" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.182315 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="extract-content" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.182885 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="950fa11d-42de-4bd7-87b2-f660e063c57f" containerName="registry-server" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.184338 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.188314 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.188931 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.190008 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.265738 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:00 crc kubenswrapper[4985]: E0128 19:15:00.267443 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.301193 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.301277 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: 
\"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.301612 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.404403 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.404473 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.404593 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.406211 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.415456 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.422078 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"collect-profiles-29493795-qh4k7\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:00 crc kubenswrapper[4985]: I0128 19:15:00.520330 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.010117 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.916704 4985 generic.go:334] "Generic (PLEG): container finished" podID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerID="338f8d06b8e77092f3ed49ded314fa263d3bc00689eede0c01a39e28fc35ddd0" exitCode=0 Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.916807 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" event={"ID":"dc7f7054-2ff2-4045-aa35-4345b449dc70","Type":"ContainerDied","Data":"338f8d06b8e77092f3ed49ded314fa263d3bc00689eede0c01a39e28fc35ddd0"} Jan 28 19:15:01 crc kubenswrapper[4985]: I0128 19:15:01.917200 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" event={"ID":"dc7f7054-2ff2-4045-aa35-4345b449dc70","Type":"ContainerStarted","Data":"ea047508cb623d2e90c208409d5cd0ff3b3af32c8bb319c49b3ee7fa83da9fe0"} Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.383468 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.486514 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") pod \"dc7f7054-2ff2-4045-aa35-4345b449dc70\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.486629 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") pod \"dc7f7054-2ff2-4045-aa35-4345b449dc70\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.486809 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") pod \"dc7f7054-2ff2-4045-aa35-4345b449dc70\" (UID: \"dc7f7054-2ff2-4045-aa35-4345b449dc70\") " Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.494395 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9" (OuterVolumeSpecName: "kube-api-access-k75l9") pod "dc7f7054-2ff2-4045-aa35-4345b449dc70" (UID: "dc7f7054-2ff2-4045-aa35-4345b449dc70"). InnerVolumeSpecName "kube-api-access-k75l9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.494433 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "dc7f7054-2ff2-4045-aa35-4345b449dc70" (UID: "dc7f7054-2ff2-4045-aa35-4345b449dc70"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.495643 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume" (OuterVolumeSpecName: "config-volume") pod "dc7f7054-2ff2-4045-aa35-4345b449dc70" (UID: "dc7f7054-2ff2-4045-aa35-4345b449dc70"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.589232 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k75l9\" (UniqueName: \"kubernetes.io/projected/dc7f7054-2ff2-4045-aa35-4345b449dc70-kube-api-access-k75l9\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.589271 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/dc7f7054-2ff2-4045-aa35-4345b449dc70-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.589280 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dc7f7054-2ff2-4045-aa35-4345b449dc70-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.946425 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" event={"ID":"dc7f7054-2ff2-4045-aa35-4345b449dc70","Type":"ContainerDied","Data":"ea047508cb623d2e90c208409d5cd0ff3b3af32c8bb319c49b3ee7fa83da9fe0"} Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.947187 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ea047508cb623d2e90c208409d5cd0ff3b3af32c8bb319c49b3ee7fa83da9fe0" Jan 28 19:15:03 crc kubenswrapper[4985]: I0128 19:15:03.946815 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7" Jan 28 19:15:04 crc kubenswrapper[4985]: I0128 19:15:04.464873 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"] Jan 28 19:15:04 crc kubenswrapper[4985]: I0128 19:15:04.475617 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493750-zsmmm"] Jan 28 19:15:05 crc kubenswrapper[4985]: I0128 19:15:05.282892 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfca2781-d8d0-4e7e-85c8-d337780059ae" path="/var/lib/kubelet/pods/dfca2781-d8d0-4e7e-85c8-d337780059ae/volumes" Jan 28 19:15:12 crc kubenswrapper[4985]: I0128 19:15:12.263990 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:12 crc kubenswrapper[4985]: E0128 19:15:12.264912 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:27 crc kubenswrapper[4985]: I0128 19:15:27.265229 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:27 crc kubenswrapper[4985]: E0128 19:15:27.266672 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:38 crc kubenswrapper[4985]: I0128 19:15:38.264813 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:38 crc kubenswrapper[4985]: E0128 19:15:38.265643 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:15:42 crc kubenswrapper[4985]: I0128 19:15:42.264093 4985 scope.go:117] "RemoveContainer" containerID="0f1e952a6fa49b7083594207d25422769b2776c1aec196aa97dc536dd6123d3e" Jan 28 19:15:53 crc kubenswrapper[4985]: I0128 19:15:53.264326 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:15:53 crc kubenswrapper[4985]: E0128 19:15:53.265719 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:06 crc kubenswrapper[4985]: I0128 19:16:06.264880 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:06 crc kubenswrapper[4985]: E0128 19:16:06.266327 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:21 crc kubenswrapper[4985]: I0128 19:16:21.273584 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:21 crc kubenswrapper[4985]: E0128 19:16:21.274583 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:34 crc kubenswrapper[4985]: I0128 19:16:34.264092 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:34 crc kubenswrapper[4985]: E0128 19:16:34.264972 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:48 crc kubenswrapper[4985]: I0128 19:16:48.263910 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:48 crc kubenswrapper[4985]: E0128 19:16:48.264832 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:16:59 crc kubenswrapper[4985]: I0128 19:16:59.264858 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:16:59 crc kubenswrapper[4985]: E0128 19:16:59.265786 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:14 crc kubenswrapper[4985]: I0128 19:17:14.264367 4985 
scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:14 crc kubenswrapper[4985]: E0128 19:17:14.265165 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:28 crc kubenswrapper[4985]: I0128 19:17:28.264418 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:28 crc kubenswrapper[4985]: E0128 19:17:28.265194 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:43 crc kubenswrapper[4985]: I0128 19:17:43.264308 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:43 crc kubenswrapper[4985]: E0128 19:17:43.265124 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:17:54 crc kubenswrapper[4985]: I0128 19:17:54.264307 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:17:54 crc kubenswrapper[4985]: E0128 19:17:54.265053 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:08 crc kubenswrapper[4985]: I0128 19:18:08.264669 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:08 crc kubenswrapper[4985]: E0128 19:18:08.265495 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:21 crc kubenswrapper[4985]: I0128 19:18:21.272425 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:21 crc kubenswrapper[4985]: E0128 19:18:21.273245 4985 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.264031 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:35 crc kubenswrapper[4985]: E0128 19:18:35.264803 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.542629 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:18:35 crc kubenswrapper[4985]: E0128 19:18:35.543697 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerName="collect-profiles" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.543720 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerName="collect-profiles" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.546307 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" containerName="collect-profiles" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.583899 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.597963 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.740503 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.740589 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.740813 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.843484 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.843555 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.843748 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.844421 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.844601 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.869099 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"redhat-operators-z9f59\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:35 crc kubenswrapper[4985]: I0128 19:18:35.916084 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:36 crc kubenswrapper[4985]: I0128 19:18:36.447483 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.455757 4985 generic.go:334] "Generic (PLEG): container finished" podID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" exitCode=0 Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.456132 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f"} Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.456165 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerStarted","Data":"355bd54575836eb89434d5f80445367bca9f1cbab148609bff229841432e69de"} Jan 28 19:18:37 crc kubenswrapper[4985]: I0128 19:18:37.459132 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:18:38 crc kubenswrapper[4985]: I0128 19:18:38.468179 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerStarted","Data":"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4"} Jan 28 19:18:45 crc kubenswrapper[4985]: E0128 19:18:45.772884 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29d3c5bf_f955_4498_a72d_b71b0bb65d6e.slice/crio-conmon-bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4.scope\": RecentStats: unable to find data in memory cache]" Jan 28 19:18:46 crc kubenswrapper[4985]: I0128 19:18:46.576396 4985 generic.go:334] "Generic (PLEG): container finished" podID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" exitCode=0 Jan 28 19:18:46 crc kubenswrapper[4985]: I0128 19:18:46.576762 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4"} Jan 28 19:18:47 crc kubenswrapper[4985]: I0128 19:18:47.265157 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:18:48 crc kubenswrapper[4985]: I0128 19:18:48.599337 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerStarted","Data":"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff"} Jan 28 19:18:48 crc kubenswrapper[4985]: I0128 
19:18:48.608702 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59"} Jan 28 19:18:48 crc kubenswrapper[4985]: I0128 19:18:48.629988 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z9f59" podStartSLOduration=4.068388264 podStartE2EDuration="13.629968909s" podCreationTimestamp="2026-01-28 19:18:35 +0000 UTC" firstStartedPulling="2026-01-28 19:18:37.458844556 +0000 UTC m=+3928.285407387" lastFinishedPulling="2026-01-28 19:18:47.020425211 +0000 UTC m=+3937.846988032" observedRunningTime="2026-01-28 19:18:48.622305482 +0000 UTC m=+3939.448868313" watchObservedRunningTime="2026-01-28 19:18:48.629968909 +0000 UTC m=+3939.456531720" Jan 28 19:18:55 crc kubenswrapper[4985]: I0128 19:18:55.916551 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:55 crc kubenswrapper[4985]: I0128 19:18:55.917067 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:18:56 crc kubenswrapper[4985]: I0128 19:18:56.974085 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9f59" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" probeResult="failure" output=< Jan 28 19:18:56 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:18:56 crc kubenswrapper[4985]: > Jan 28 19:19:07 crc kubenswrapper[4985]: I0128 19:19:07.263269 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-z9f59" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" probeResult="failure" output=< Jan 28 19:19:07 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:19:07 crc kubenswrapper[4985]: > Jan 28 19:19:15 crc kubenswrapper[4985]: I0128 19:19:15.965988 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:16 crc kubenswrapper[4985]: I0128 19:19:16.014751 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:16 crc kubenswrapper[4985]: I0128 19:19:16.207866 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:19:17 crc kubenswrapper[4985]: I0128 19:19:17.930274 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-z9f59" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" containerID="cri-o://956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" gracePeriod=2 Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.585532 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.779497 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") pod \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.789577 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") pod \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.789638 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") pod \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\" (UID: \"29d3c5bf-f955-4498-a72d-b71b0bb65d6e\") " Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.790562 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities" (OuterVolumeSpecName: "utilities") pod "29d3c5bf-f955-4498-a72d-b71b0bb65d6e" (UID: "29d3c5bf-f955-4498-a72d-b71b0bb65d6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.797700 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f" (OuterVolumeSpecName: "kube-api-access-xlr2f") pod "29d3c5bf-f955-4498-a72d-b71b0bb65d6e" (UID: "29d3c5bf-f955-4498-a72d-b71b0bb65d6e"). InnerVolumeSpecName "kube-api-access-xlr2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.893006 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xlr2f\" (UniqueName: \"kubernetes.io/projected/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-kube-api-access-xlr2f\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.893048 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.899370 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "29d3c5bf-f955-4498-a72d-b71b0bb65d6e" (UID: "29d3c5bf-f955-4498-a72d-b71b0bb65d6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948702 4985 generic.go:334] "Generic (PLEG): container finished" podID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" exitCode=0 Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948774 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff"} Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948807 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z9f59" event={"ID":"29d3c5bf-f955-4498-a72d-b71b0bb65d6e","Type":"ContainerDied","Data":"355bd54575836eb89434d5f80445367bca9f1cbab148609bff229841432e69de"} Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.948826 4985 scope.go:117] "RemoveContainer" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.949028 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z9f59" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.986508 4985 scope.go:117] "RemoveContainer" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" Jan 28 19:19:18 crc kubenswrapper[4985]: I0128 19:19:18.996350 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/29d3c5bf-f955-4498-a72d-b71b0bb65d6e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.000187 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.012328 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-z9f59"] Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.025559 4985 scope.go:117] "RemoveContainer" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.119493 4985 scope.go:117] "RemoveContainer" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" Jan 28 19:19:19 crc kubenswrapper[4985]: E0128 19:19:19.120354 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff\": container with ID starting with 956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff not found: ID does not exist" containerID="956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.120407 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff"} err="failed to get container status \"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff\": rpc error: code = NotFound desc = could not find container \"956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff\": container with ID starting with 956e9b138d2389910da5c9caaa293a8566db0e058699a7d276b369e3e4b18bff not found: ID does not exist" Jan 28 19:19:19 crc 
kubenswrapper[4985]: I0128 19:19:19.120440 4985 scope.go:117] "RemoveContainer" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" Jan 28 19:19:19 crc kubenswrapper[4985]: E0128 19:19:19.121296 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4\": container with ID starting with bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4 not found: ID does not exist" containerID="bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.121337 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4"} err="failed to get container status \"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4\": rpc error: code = NotFound desc = could not find container \"bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4\": container with ID starting with bd3aacb8dcc95450c3a94fa162beeb93f09b6a5c92c16e2048135675c3d814a4 not found: ID does not exist" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.121357 4985 scope.go:117] "RemoveContainer" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" Jan 28 19:19:19 crc kubenswrapper[4985]: E0128 19:19:19.122773 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f\": container with ID starting with bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f not found: ID does not exist" containerID="bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.122820 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f"} err="failed to get container status \"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f\": rpc error: code = NotFound desc = could not find container \"bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f\": container with ID starting with bd6ad4cf1cedf58619a3bd9d1466446d35af01e876482ec27264abdb76c7e75f not found: ID does not exist" Jan 28 19:19:19 crc kubenswrapper[4985]: I0128 19:19:19.281046 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" path="/var/lib/kubelet/pods/29d3c5bf-f955-4498-a72d-b71b0bb65d6e/volumes" Jan 28 19:19:56 crc kubenswrapper[4985]: E0128 19:19:56.442477 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:55024->38.102.83.195:43365: write tcp 38.102.83.195:55024->38.102.83.195:43365: write: broken pipe Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.300794 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:04 crc kubenswrapper[4985]: E0128 19:21:04.301836 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-utilities" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.301850 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-utilities" Jan 28 19:21:04 crc 
kubenswrapper[4985]: E0128 19:21:04.301865 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-content" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.301871 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="extract-content" Jan 28 19:21:04 crc kubenswrapper[4985]: E0128 19:21:04.301901 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.301909 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.302165 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="29d3c5bf-f955-4498-a72d-b71b0bb65d6e" containerName="registry-server" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.303999 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.306200 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.306238 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.306289 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.321728 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.408896 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409018 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409057 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jlll\" (UniqueName: 
\"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409659 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.409680 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.437968 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"redhat-marketplace-7j52l\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:04 crc kubenswrapper[4985]: I0128 19:21:04.675701 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:05 crc kubenswrapper[4985]: I0128 19:21:05.339868 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:06 crc kubenswrapper[4985]: I0128 19:21:06.158340 4985 generic.go:334] "Generic (PLEG): container finished" podID="af1fd134-bd28-4422-88b4-27f389229481" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" exitCode=0 Jan 28 19:21:06 crc kubenswrapper[4985]: I0128 19:21:06.158545 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e"} Jan 28 19:21:06 crc kubenswrapper[4985]: I0128 19:21:06.158965 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerStarted","Data":"bc817422166edcc0a6ae8557035a413653c4ac3ad6d4d9093ca8973bcee53f57"} Jan 28 19:21:08 crc kubenswrapper[4985]: I0128 19:21:08.193834 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerStarted","Data":"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9"} Jan 28 19:21:09 crc kubenswrapper[4985]: I0128 19:21:09.205736 4985 generic.go:334] "Generic (PLEG): container finished" podID="af1fd134-bd28-4422-88b4-27f389229481" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" exitCode=0 Jan 28 19:21:09 crc kubenswrapper[4985]: I0128 19:21:09.205823 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9"} Jan 28 
19:21:10 crc kubenswrapper[4985]: I0128 19:21:10.219878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerStarted","Data":"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99"} Jan 28 19:21:10 crc kubenswrapper[4985]: I0128 19:21:10.249629 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7j52l" podStartSLOduration=2.7222573089999997 podStartE2EDuration="6.24960471s" podCreationTimestamp="2026-01-28 19:21:04 +0000 UTC" firstStartedPulling="2026-01-28 19:21:06.163235452 +0000 UTC m=+4076.989798273" lastFinishedPulling="2026-01-28 19:21:09.690582853 +0000 UTC m=+4080.517145674" observedRunningTime="2026-01-28 19:21:10.243005273 +0000 UTC m=+4081.069568094" watchObservedRunningTime="2026-01-28 19:21:10.24960471 +0000 UTC m=+4081.076167541" Jan 28 19:21:11 crc kubenswrapper[4985]: I0128 19:21:11.186409 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:21:11 crc kubenswrapper[4985]: I0128 19:21:11.186483 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:21:14 crc kubenswrapper[4985]: I0128 19:21:14.676384 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:14 crc kubenswrapper[4985]: I0128 19:21:14.676735 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:14 crc kubenswrapper[4985]: I0128 19:21:14.736529 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:15 crc kubenswrapper[4985]: I0128 19:21:15.348971 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:15 crc kubenswrapper[4985]: I0128 19:21:15.409034 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.300819 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7j52l" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" containerID="cri-o://729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" gracePeriod=2 Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.927369 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.976425 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") pod \"af1fd134-bd28-4422-88b4-27f389229481\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.976555 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") pod \"af1fd134-bd28-4422-88b4-27f389229481\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.976802 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") pod \"af1fd134-bd28-4422-88b4-27f389229481\" (UID: \"af1fd134-bd28-4422-88b4-27f389229481\") " Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.978305 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities" (OuterVolumeSpecName: "utilities") pod "af1fd134-bd28-4422-88b4-27f389229481" (UID: "af1fd134-bd28-4422-88b4-27f389229481"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.986390 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:21:17 crc kubenswrapper[4985]: I0128 19:21:17.990983 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll" (OuterVolumeSpecName: "kube-api-access-7jlll") pod "af1fd134-bd28-4422-88b4-27f389229481" (UID: "af1fd134-bd28-4422-88b4-27f389229481"). InnerVolumeSpecName "kube-api-access-7jlll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.005387 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "af1fd134-bd28-4422-88b4-27f389229481" (UID: "af1fd134-bd28-4422-88b4-27f389229481"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.088009 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jlll\" (UniqueName: \"kubernetes.io/projected/af1fd134-bd28-4422-88b4-27f389229481-kube-api-access-7jlll\") on node \"crc\" DevicePath \"\"" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.088050 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/af1fd134-bd28-4422-88b4-27f389229481-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316651 4985 generic.go:334] "Generic (PLEG): container finished" podID="af1fd134-bd28-4422-88b4-27f389229481" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" exitCode=0 Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316715 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99"} Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316729 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7j52l" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316763 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7j52l" event={"ID":"af1fd134-bd28-4422-88b4-27f389229481","Type":"ContainerDied","Data":"bc817422166edcc0a6ae8557035a413653c4ac3ad6d4d9093ca8973bcee53f57"} Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.316790 4985 scope.go:117] "RemoveContainer" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.346103 4985 scope.go:117] "RemoveContainer" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.369331 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.380024 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7j52l"] Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.382656 4985 scope.go:117] "RemoveContainer" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.425381 4985 scope.go:117] "RemoveContainer" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" Jan 28 19:21:18 crc kubenswrapper[4985]: E0128 19:21:18.425863 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99\": container with ID starting with 729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99 not found: ID does not exist" containerID="729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.425921 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99"} err="failed to get container status 
\"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99\": rpc error: code = NotFound desc = could not find container \"729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99\": container with ID starting with 729b014c1ea14d6d2cb7835e00659d59683b7207d1ae90ace6353635d1ba3a99 not found: ID does not exist" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.425957 4985 scope.go:117] "RemoveContainer" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" Jan 28 19:21:18 crc kubenswrapper[4985]: E0128 19:21:18.426368 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9\": container with ID starting with 9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9 not found: ID does not exist" containerID="9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.426407 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9"} err="failed to get container status \"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9\": rpc error: code = NotFound desc = could not find container \"9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9\": container with ID starting with 9a1a936de535588900c283b9631beb001deed2fb48b8e6a7d17f005154cdace9 not found: ID does not exist" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.426434 4985 scope.go:117] "RemoveContainer" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" Jan 28 19:21:18 crc kubenswrapper[4985]: E0128 19:21:18.426824 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e\": container with ID starting with ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e not found: ID does not exist" containerID="ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e" Jan 28 19:21:18 crc kubenswrapper[4985]: I0128 19:21:18.426844 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e"} err="failed to get container status \"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e\": rpc error: code = NotFound desc = could not find container \"ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e\": container with ID starting with ac069bbaec8d3387dac038e5807fbad99a6be6bc868fe0b11545e20e6e883b9e not found: ID does not exist" Jan 28 19:21:19 crc kubenswrapper[4985]: I0128 19:21:19.290856 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af1fd134-bd28-4422-88b4-27f389229481" path="/var/lib/kubelet/pods/af1fd134-bd28-4422-88b4-27f389229481/volumes" Jan 28 19:21:41 crc kubenswrapper[4985]: I0128 19:21:41.186531 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:21:41 crc kubenswrapper[4985]: I0128 19:21:41.187143 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.185848 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.186420 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.186475 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.187424 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:22:11 crc kubenswrapper[4985]: I0128 19:22:11.187506 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59" gracePeriod=600 Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.190897 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59" exitCode=0 Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.190939 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59"} Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.191473 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"} Jan 28 19:22:12 crc kubenswrapper[4985]: I0128 19:22:12.191495 4985 scope.go:117] "RemoveContainer" containerID="ff4f3e0e85c85b9e839e6f33940f1d339697777e4b1b9c17d6d196452b07b9cf" Jan 28 19:24:11 crc kubenswrapper[4985]: I0128 19:24:11.186112 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:24:11 crc 
kubenswrapper[4985]: I0128 19:24:11.186666 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:24:41 crc kubenswrapper[4985]: I0128 19:24:41.186110 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:24:41 crc kubenswrapper[4985]: I0128 19:24:41.186707 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.186323 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.187123 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.187190 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.191377 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:25:11 crc kubenswrapper[4985]: I0128 19:25:11.191540 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" gracePeriod=600 Jan 28 19:25:11 crc kubenswrapper[4985]: E0128 19:25:11.330953 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.232169 4985 generic.go:334] "Generic (PLEG): container finished" 
podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" exitCode=0 Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.232333 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"} Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.232532 4985 scope.go:117] "RemoveContainer" containerID="4009eafc6fc98f5bba47d16fef1bdf99ca37bd45a3ef67b66f8ba8cec4bf0f59" Jan 28 19:25:12 crc kubenswrapper[4985]: I0128 19:25:12.233457 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:12 crc kubenswrapper[4985]: E0128 19:25:12.233993 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:25:27 crc kubenswrapper[4985]: I0128 19:25:27.264781 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:27 crc kubenswrapper[4985]: E0128 19:25:27.265951 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:25:41 crc kubenswrapper[4985]: I0128 19:25:41.271971 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:41 crc kubenswrapper[4985]: E0128 19:25:41.273922 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:25:55 crc kubenswrapper[4985]: I0128 19:25:55.264387 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:25:55 crc kubenswrapper[4985]: E0128 19:25:55.265136 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:26:10 crc kubenswrapper[4985]: I0128 19:26:10.264464 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:26:10 crc 
kubenswrapper[4985]: E0128 19:26:10.265870 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:26:24 crc kubenswrapper[4985]: I0128 19:26:24.264629 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:26:24 crc kubenswrapper[4985]: E0128 19:26:24.265369 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:26:37 crc kubenswrapper[4985]: I0128 19:26:37.264621 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:26:37 crc kubenswrapper[4985]: E0128 19:26:37.265655 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:26:49 crc kubenswrapper[4985]: I0128 19:26:49.263952 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:26:49 crc kubenswrapper[4985]: E0128 19:26:49.264911 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:00 crc kubenswrapper[4985]: I0128 19:27:00.264341 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:00 crc kubenswrapper[4985]: E0128 19:27:00.265296 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:11 crc kubenswrapper[4985]: I0128 19:27:11.271366 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:11 crc kubenswrapper[4985]: E0128 19:27:11.272317 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:24 crc kubenswrapper[4985]: I0128 19:27:24.264510 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:24 crc kubenswrapper[4985]: E0128 19:27:24.265199 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:39 crc kubenswrapper[4985]: I0128 19:27:39.263941 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:39 crc kubenswrapper[4985]: E0128 19:27:39.264690 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:27:52 crc kubenswrapper[4985]: I0128 19:27:52.266009 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:27:52 crc kubenswrapper[4985]: E0128 19:27:52.266903 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.279968 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:05 crc kubenswrapper[4985]: E0128 19:28:05.281026 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281046 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" Jan 28 19:28:05 crc kubenswrapper[4985]: E0128 19:28:05.281082 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-utilities" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281090 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-utilities" Jan 28 19:28:05 crc kubenswrapper[4985]: E0128 19:28:05.281111 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-content" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281119 4985 
state_mem.go:107] "Deleted CPUSet assignment" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="extract-content" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.281403 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="af1fd134-bd28-4422-88b4-27f389229481" containerName="registry-server" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.283500 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.283628 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.364580 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.368524 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.368657 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.471455 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.471521 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.471578 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.472094 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: 
I0128 19:28:05.472191 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:05 crc kubenswrapper[4985]: I0128 19:28:05.931039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"certified-operators-g5d6k\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:06 crc kubenswrapper[4985]: I0128 19:28:06.220299 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:06 crc kubenswrapper[4985]: I0128 19:28:06.265533 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:28:06 crc kubenswrapper[4985]: E0128 19:28:06.265768 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:28:06 crc kubenswrapper[4985]: I0128 19:28:06.747300 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.049430 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.053861 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.074718 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.113420 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.113524 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.113640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.215549 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.215633 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.215714 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.216162 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.216186 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.237042 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"community-operators-dwkk7\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.364415 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerID="69d4e05fa8611628adda8b6890905569708e909b85dd0cae338b974b7963ab20" exitCode=0 Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.364478 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"69d4e05fa8611628adda8b6890905569708e909b85dd0cae338b974b7963ab20"} Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.364515 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerStarted","Data":"f995d9e0fe7cc52e4e2477b23584afbe7acdcdaaff398007005dc0deaba49a75"} Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.367172 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 19:28:07 crc kubenswrapper[4985]: I0128 19:28:07.430029 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:08 crc kubenswrapper[4985]: I0128 19:28:08.381770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerStarted","Data":"4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929"} Jan 28 19:28:08 crc kubenswrapper[4985]: I0128 19:28:08.524109 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:09 crc kubenswrapper[4985]: I0128 19:28:09.396403 4985 generic.go:334] "Generic (PLEG): container finished" podID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerID="f7e71cc3aa266e86642df0368ccd0be0c9024e06e8dd76ed47af29f9b0389fba" exitCode=0 Jan 28 19:28:09 crc kubenswrapper[4985]: I0128 19:28:09.396516 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"f7e71cc3aa266e86642df0368ccd0be0c9024e06e8dd76ed47af29f9b0389fba"} Jan 28 19:28:09 crc kubenswrapper[4985]: I0128 19:28:09.396610 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerStarted","Data":"5d8d8e16e03ffc2f078f992a22dea1222e612d0595de642ee60d2ae1e024af47"} Jan 28 19:28:10 crc kubenswrapper[4985]: I0128 19:28:10.411235 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerID="4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929" exitCode=0 Jan 28 19:28:10 crc kubenswrapper[4985]: I0128 19:28:10.411290 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929"} Jan 28 19:28:11 
Jan 28 19:28:11 crc kubenswrapper[4985]: I0128 19:28:11.423552 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerStarted","Data":"f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4"}
Jan 28 19:28:11 crc kubenswrapper[4985]: I0128 19:28:11.431777 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerStarted","Data":"5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73"}
Jan 28 19:28:11 crc kubenswrapper[4985]: I0128 19:28:11.465736 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g5d6k" podStartSLOduration=2.82545814 podStartE2EDuration="6.465717258s" podCreationTimestamp="2026-01-28 19:28:05 +0000 UTC" firstStartedPulling="2026-01-28 19:28:07.366815125 +0000 UTC m=+4498.193377956" lastFinishedPulling="2026-01-28 19:28:11.007074243 +0000 UTC m=+4501.833637074" observedRunningTime="2026-01-28 19:28:11.462144567 +0000 UTC m=+4502.288707478" watchObservedRunningTime="2026-01-28 19:28:11.465717258 +0000 UTC m=+4502.292280089"
Jan 28 19:28:13 crc kubenswrapper[4985]: I0128 19:28:13.456810 4985 generic.go:334] "Generic (PLEG): container finished" podID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerID="f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4" exitCode=0
Jan 28 19:28:13 crc kubenswrapper[4985]: I0128 19:28:13.456892 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4"}
Jan 28 19:28:14 crc kubenswrapper[4985]: I0128 19:28:14.476123 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerStarted","Data":"23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc"}
Jan 28 19:28:14 crc kubenswrapper[4985]: I0128 19:28:14.521458 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dwkk7" podStartSLOduration=3.04157915 podStartE2EDuration="7.521431288s" podCreationTimestamp="2026-01-28 19:28:07 +0000 UTC" firstStartedPulling="2026-01-28 19:28:09.401754546 +0000 UTC m=+4500.228317367" lastFinishedPulling="2026-01-28 19:28:13.881606654 +0000 UTC m=+4504.708169505" observedRunningTime="2026-01-28 19:28:14.500228688 +0000 UTC m=+4505.326791549" watchObservedRunningTime="2026-01-28 19:28:14.521431288 +0000 UTC m=+4505.347994109"
Jan 28 19:28:16 crc kubenswrapper[4985]: I0128 19:28:16.220653 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:16 crc kubenswrapper[4985]: I0128 19:28:16.221043 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.272788 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-g5d6k" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" probeResult="failure" output=<
Jan 28 19:28:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 19:28:17 crc kubenswrapper[4985]: >
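The probe output above ("timeout: failed to connect service \":50051\" within 1s") indicates the registry-server's gRPC port was not yet accepting connections while the catalog container was still loading. A hedged stand-in for that check, assuming it amounts to a dial with a one-second deadline (the actual probe binary in the image may differ):

```go
// Illustrative sketch of the startup-probe check implied by the log output:
// try to reach the registry-server port within 1s, exit non-zero on failure.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	addr := "localhost:50051" // registry-server gRPC port seen in the probe output
	conn, err := net.DialTimeout("tcp", addr, 1*time.Second)
	if err != nil {
		fmt.Fprintf(os.Stderr, "timeout: failed to connect service %q within 1s\n", ":50051")
		os.Exit(1) // a non-zero exit marks this startup-probe attempt as failed
	}
	conn.Close()
}
```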
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.431180 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.431244 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:17 crc kubenswrapper[4985]: I0128 19:28:17.583561 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:20 crc kubenswrapper[4985]: I0128 19:28:20.265061 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680"
Jan 28 19:28:20 crc kubenswrapper[4985]: E0128 19:28:20.266128 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:28:26 crc kubenswrapper[4985]: I0128 19:28:26.272886 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:26 crc kubenswrapper[4985]: I0128 19:28:26.323791 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g5d6k"
Jan 28 19:28:26 crc kubenswrapper[4985]: I0128 19:28:26.516776 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"]
Jan 28 19:28:27 crc kubenswrapper[4985]: I0128 19:28:27.488568 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dwkk7"
Jan 28 19:28:27 crc kubenswrapper[4985]: I0128 19:28:27.630070 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g5d6k" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" containerID="cri-o://5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73" gracePeriod=2
Jan 28 19:28:28 crc kubenswrapper[4985]: E0128 19:28:28.320043 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7bd660cc_bac3_40a2_baf1_d27477b66355.slice/crio-5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 19:28:28 crc kubenswrapper[4985]: I0128 19:28:28.651319 4985 generic.go:334] "Generic (PLEG): container finished" podID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerID="5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73" exitCode=0
Jan 28 19:28:28 crc kubenswrapper[4985]: I0128 19:28:28.651428 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73"}
Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.085213 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k"
Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.200043 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") pod \"7bd660cc-bac3-40a2-baf1-d27477b66355\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.200108 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") pod \"7bd660cc-bac3-40a2-baf1-d27477b66355\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.200310 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") pod \"7bd660cc-bac3-40a2-baf1-d27477b66355\" (UID: \"7bd660cc-bac3-40a2-baf1-d27477b66355\") " Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.201234 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities" (OuterVolumeSpecName: "utilities") pod "7bd660cc-bac3-40a2-baf1-d27477b66355" (UID: "7bd660cc-bac3-40a2-baf1-d27477b66355"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.201770 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.207832 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x" (OuterVolumeSpecName: "kube-api-access-65w6x") pod "7bd660cc-bac3-40a2-baf1-d27477b66355" (UID: "7bd660cc-bac3-40a2-baf1-d27477b66355"). InnerVolumeSpecName "kube-api-access-65w6x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.256603 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7bd660cc-bac3-40a2-baf1-d27477b66355" (UID: "7bd660cc-bac3-40a2-baf1-d27477b66355"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.305242 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7bd660cc-bac3-40a2-baf1-d27477b66355-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.305287 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-65w6x\" (UniqueName: \"kubernetes.io/projected/7bd660cc-bac3-40a2-baf1-d27477b66355-kube-api-access-65w6x\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.668445 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g5d6k" event={"ID":"7bd660cc-bac3-40a2-baf1-d27477b66355","Type":"ContainerDied","Data":"f995d9e0fe7cc52e4e2477b23584afbe7acdcdaaff398007005dc0deaba49a75"} Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.668501 4985 scope.go:117] "RemoveContainer" containerID="5508c07a73c0a5675698c73285af2e9603f79d518f2dfc72f90fc1797df3fd73" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.668659 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g5d6k" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.707940 4985 scope.go:117] "RemoveContainer" containerID="4152802d09478a45d44a174e418e640afbf94234635886a9d8d380306df85929" Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.708656 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:29 crc kubenswrapper[4985]: I0128 19:28:29.730811 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g5d6k"] Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.148909 4985 scope.go:117] "RemoveContainer" containerID="69d4e05fa8611628adda8b6890905569708e909b85dd0cae338b974b7963ab20" Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.316556 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.316855 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dwkk7" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" containerID="cri-o://23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc" gracePeriod=2 Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.697379 4985 generic.go:334] "Generic (PLEG): container finished" podID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerID="23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc" exitCode=0 Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.697743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc"} Jan 28 19:28:30 crc kubenswrapper[4985]: I0128 19:28:30.917178 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.048304 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") pod \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.048458 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") pod \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.048571 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") pod \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\" (UID: \"15cde5ed-b5df-4ebd-9dc3-417d405ad81e\") " Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.049396 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities" (OuterVolumeSpecName: "utilities") pod "15cde5ed-b5df-4ebd-9dc3-417d405ad81e" (UID: "15cde5ed-b5df-4ebd-9dc3-417d405ad81e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.055581 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks" (OuterVolumeSpecName: "kube-api-access-vx6ks") pod "15cde5ed-b5df-4ebd-9dc3-417d405ad81e" (UID: "15cde5ed-b5df-4ebd-9dc3-417d405ad81e"). InnerVolumeSpecName "kube-api-access-vx6ks". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.100431 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "15cde5ed-b5df-4ebd-9dc3-417d405ad81e" (UID: "15cde5ed-b5df-4ebd-9dc3-417d405ad81e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.151037 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.151294 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vx6ks\" (UniqueName: \"kubernetes.io/projected/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-kube-api-access-vx6ks\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.151371 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15cde5ed-b5df-4ebd-9dc3-417d405ad81e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.282617 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" path="/var/lib/kubelet/pods/7bd660cc-bac3-40a2-baf1-d27477b66355/volumes" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.713360 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dwkk7" event={"ID":"15cde5ed-b5df-4ebd-9dc3-417d405ad81e","Type":"ContainerDied","Data":"5d8d8e16e03ffc2f078f992a22dea1222e612d0595de642ee60d2ae1e024af47"} Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.713416 4985 scope.go:117] "RemoveContainer" containerID="23414830f730e9c3568e5d8028f59964e25d3291603706489ec85f15964ff5fc" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.713472 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dwkk7" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.740290 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.741342 4985 scope.go:117] "RemoveContainer" containerID="f233cfdbfd8ae96be208118bf4d667f20725f55748c7d7e2f273e8c3f12f44d4" Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.755375 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dwkk7"] Jan 28 19:28:31 crc kubenswrapper[4985]: I0128 19:28:31.769803 4985 scope.go:117] "RemoveContainer" containerID="f7e71cc3aa266e86642df0368ccd0be0c9024e06e8dd76ed47af29f9b0389fba" Jan 28 19:28:32 crc kubenswrapper[4985]: I0128 19:28:32.265311 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:28:32 crc kubenswrapper[4985]: E0128 19:28:32.265727 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:28:33 crc kubenswrapper[4985]: I0128 19:28:33.279000 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" path="/var/lib/kubelet/pods/15cde5ed-b5df-4ebd-9dc3-417d405ad81e/volumes" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.146944 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148359 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148381 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148411 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148422 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148438 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148449 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148470 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148480 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148511 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148522 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="extract-utilities" Jan 28 19:28:43 crc kubenswrapper[4985]: E0128 19:28:43.148578 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148591 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="extract-content" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.148968 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bd660cc-bac3-40a2-baf1-d27477b66355" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.149015 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="15cde5ed-b5df-4ebd-9dc3-417d405ad81e" containerName="registry-server" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.152068 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.161294 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.265129 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.265642 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.265689 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.368120 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.368251 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.368332 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.369027 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.369019 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.416242 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"redhat-operators-h9jhp\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:43 crc kubenswrapper[4985]: I0128 19:28:43.475370 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.083376 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.881406 4985 generic.go:334] "Generic (PLEG): container finished" podID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" exitCode=0 Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.881678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e"} Jan 28 19:28:44 crc kubenswrapper[4985]: I0128 19:28:44.881712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerStarted","Data":"efb6ed0d1fc336a0b4e1274c9acba02fb5e05bd0a081461a30985004bf135538"} Jan 28 19:28:45 crc kubenswrapper[4985]: I0128 19:28:45.897895 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerStarted","Data":"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157"} Jan 28 19:28:47 crc kubenswrapper[4985]: I0128 19:28:47.268254 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:28:47 crc kubenswrapper[4985]: E0128 19:28:47.270113 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:28:51 crc kubenswrapper[4985]: I0128 19:28:51.964900 4985 generic.go:334] "Generic (PLEG): container finished" podID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" exitCode=0 Jan 28 19:28:51 crc kubenswrapper[4985]: I0128 19:28:51.964958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157"} Jan 28 19:28:52 crc kubenswrapper[4985]: I0128 19:28:52.997007 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerStarted","Data":"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d"} Jan 28 19:28:53 crc kubenswrapper[4985]: I0128 19:28:53.029169 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-h9jhp" podStartSLOduration=2.519334965 podStartE2EDuration="10.029143743s" podCreationTimestamp="2026-01-28 19:28:43 +0000 UTC" firstStartedPulling="2026-01-28 19:28:44.885239993 +0000 UTC m=+4535.711802814" lastFinishedPulling="2026-01-28 19:28:52.395048771 +0000 UTC m=+4543.221611592" observedRunningTime="2026-01-28 19:28:53.018585984 +0000 UTC m=+4543.845148815" watchObservedRunningTime="2026-01-28 19:28:53.029143743 +0000 UTC m=+4543.855706564" Jan 28 19:28:53 crc kubenswrapper[4985]: I0128 19:28:53.476792 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:53 crc kubenswrapper[4985]: I0128 19:28:53.476842 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:28:54 crc kubenswrapper[4985]: I0128 19:28:54.711003 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" probeResult="failure" output=< Jan 28 19:28:54 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:28:54 crc kubenswrapper[4985]: > Jan 28 19:29:00 crc kubenswrapper[4985]: I0128 19:29:00.263900 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:00 crc kubenswrapper[4985]: E0128 19:29:00.264688 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:04 crc kubenswrapper[4985]: I0128 19:29:04.531918 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" probeResult="failure" output=< Jan 28 19:29:04 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:29:04 crc kubenswrapper[4985]: > Jan 28 19:29:12 crc kubenswrapper[4985]: I0128 19:29:12.264457 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:12 crc kubenswrapper[4985]: E0128 19:29:12.265296 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:14 crc kubenswrapper[4985]: I0128 19:29:14.695010 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" probeResult="failure" output=< Jan 28 19:29:14 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:29:14 crc kubenswrapper[4985]: > Jan 28 19:29:23 crc kubenswrapper[4985]: I0128 19:29:23.556525 4985 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:23 crc kubenswrapper[4985]: I0128 19:29:23.620628 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:24 crc kubenswrapper[4985]: I0128 19:29:24.358836 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:29:25 crc kubenswrapper[4985]: I0128 19:29:25.264625 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:25 crc kubenswrapper[4985]: E0128 19:29:25.265311 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:25 crc kubenswrapper[4985]: I0128 19:29:25.332506 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-h9jhp" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" containerID="cri-o://c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" gracePeriod=2 Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.287789 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353710 4985 generic.go:334] "Generic (PLEG): container finished" podID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" exitCode=0 Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353761 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d"} Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353788 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-h9jhp" event={"ID":"79e005da-4531-450b-a74b-ff8d59a5d3cd","Type":"ContainerDied","Data":"efb6ed0d1fc336a0b4e1274c9acba02fb5e05bd0a081461a30985004bf135538"} Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353790 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-h9jhp" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.353804 4985 scope.go:117] "RemoveContainer" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.382510 4985 scope.go:117] "RemoveContainer" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.414320 4985 scope.go:117] "RemoveContainer" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.448813 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") pod \"79e005da-4531-450b-a74b-ff8d59a5d3cd\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.448952 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") pod \"79e005da-4531-450b-a74b-ff8d59a5d3cd\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.449032 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") pod \"79e005da-4531-450b-a74b-ff8d59a5d3cd\" (UID: \"79e005da-4531-450b-a74b-ff8d59a5d3cd\") " Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.450101 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities" (OuterVolumeSpecName: "utilities") pod "79e005da-4531-450b-a74b-ff8d59a5d3cd" (UID: "79e005da-4531-450b-a74b-ff8d59a5d3cd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.455619 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr" (OuterVolumeSpecName: "kube-api-access-k6hhr") pod "79e005da-4531-450b-a74b-ff8d59a5d3cd" (UID: "79e005da-4531-450b-a74b-ff8d59a5d3cd"). InnerVolumeSpecName "kube-api-access-k6hhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.532233 4985 scope.go:117] "RemoveContainer" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" Jan 28 19:29:26 crc kubenswrapper[4985]: E0128 19:29:26.533298 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d\": container with ID starting with c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d not found: ID does not exist" containerID="c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.533463 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d"} err="failed to get container status \"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d\": rpc error: code = NotFound desc = could not find container \"c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d\": container with ID starting with c6d4a146f7a9efd220d0ceeb5b08c1e0b8536502cebe19c338496ff361d0656d not found: ID does not exist" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.533563 4985 scope.go:117] "RemoveContainer" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" Jan 28 19:29:26 crc kubenswrapper[4985]: E0128 19:29:26.534081 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157\": container with ID starting with 2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157 not found: ID does not exist" containerID="2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.534123 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157"} err="failed to get container status \"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157\": rpc error: code = NotFound desc = could not find container \"2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157\": container with ID starting with 2c566471940653650afdd45cc7f71f42c5e96e7901bd4352665295ad1a6d6157 not found: ID does not exist" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.534154 4985 scope.go:117] "RemoveContainer" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" Jan 28 19:29:26 crc kubenswrapper[4985]: E0128 19:29:26.534524 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e\": container with ID starting with 9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e not found: ID does not exist" containerID="9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.534575 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e"} err="failed to get container status \"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e\": rpc error: code = NotFound desc = could not 
find container \"9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e\": container with ID starting with 9f37bb1c1b4d4d721a4a2a07ac936cec6c7ff3ccc95920aca7d0df4a7de7c42e not found: ID does not exist" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.551851 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.552101 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k6hhr\" (UniqueName: \"kubernetes.io/projected/79e005da-4531-450b-a74b-ff8d59a5d3cd-kube-api-access-k6hhr\") on node \"crc\" DevicePath \"\"" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.565038 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "79e005da-4531-450b-a74b-ff8d59a5d3cd" (UID: "79e005da-4531-450b-a74b-ff8d59a5d3cd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.654729 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/79e005da-4531-450b-a74b-ff8d59a5d3cd-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.696763 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:29:26 crc kubenswrapper[4985]: I0128 19:29:26.707444 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-h9jhp"] Jan 28 19:29:27 crc kubenswrapper[4985]: I0128 19:29:27.279877 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" path="/var/lib/kubelet/pods/79e005da-4531-450b-a74b-ff8d59a5d3cd/volumes" Jan 28 19:29:39 crc kubenswrapper[4985]: I0128 19:29:39.264719 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:39 crc kubenswrapper[4985]: E0128 19:29:39.265616 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:29:50 crc kubenswrapper[4985]: I0128 19:29:50.264329 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:29:50 crc kubenswrapper[4985]: E0128 19:29:50.265550 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.171727 4985 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 19:30:00 crc kubenswrapper[4985]: E0128 19:30:00.172840 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-utilities" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.172858 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-utilities" Jan 28 19:30:00 crc kubenswrapper[4985]: E0128 19:30:00.172881 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-content" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.172888 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="extract-content" Jan 28 19:30:00 crc kubenswrapper[4985]: E0128 19:30:00.172909 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.172917 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.173167 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="79e005da-4531-450b-a74b-ff8d59a5d3cd" containerName="registry-server" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.174272 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.177167 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.178635 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.186554 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.282174 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.282537 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.282701 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.385448 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.385687 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.385808 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.387883 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.933795 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:00 crc kubenswrapper[4985]: I0128 19:30:00.934098 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"collect-profiles-29493810-v5pld\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:01 crc kubenswrapper[4985]: I0128 19:30:01.097454 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:01 crc kubenswrapper[4985]: I0128 19:30:01.758388 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.265402 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:30:02 crc kubenswrapper[4985]: E0128 19:30:02.266642 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.734021 4985 generic.go:334] "Generic (PLEG): container finished" podID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerID="6c8e48c972aa2e298f7430451a2f30fabf8f72218697856b1aa3451401eef4e3" exitCode=0 Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.734106 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" event={"ID":"2bbf5b95-eb34-48ce-970a-48eec581f83b","Type":"ContainerDied","Data":"6c8e48c972aa2e298f7430451a2f30fabf8f72218697856b1aa3451401eef4e3"} Jan 28 19:30:02 crc kubenswrapper[4985]: I0128 19:30:02.734135 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" event={"ID":"2bbf5b95-eb34-48ce-970a-48eec581f83b","Type":"ContainerStarted","Data":"fa49be7150b1c0c3de249e0f82a0d01e7d454a343f93122c77700aaa7b38c1fb"} Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.319639 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.391946 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") pod \"2bbf5b95-eb34-48ce-970a-48eec581f83b\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.392126 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") pod \"2bbf5b95-eb34-48ce-970a-48eec581f83b\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.392235 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") pod \"2bbf5b95-eb34-48ce-970a-48eec581f83b\" (UID: \"2bbf5b95-eb34-48ce-970a-48eec581f83b\") " Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.394087 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume" (OuterVolumeSpecName: "config-volume") pod "2bbf5b95-eb34-48ce-970a-48eec581f83b" (UID: "2bbf5b95-eb34-48ce-970a-48eec581f83b"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.397792 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2bbf5b95-eb34-48ce-970a-48eec581f83b" (UID: "2bbf5b95-eb34-48ce-970a-48eec581f83b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.397910 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw" (OuterVolumeSpecName: "kube-api-access-89dlw") pod "2bbf5b95-eb34-48ce-970a-48eec581f83b" (UID: "2bbf5b95-eb34-48ce-970a-48eec581f83b"). InnerVolumeSpecName "kube-api-access-89dlw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.495096 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-89dlw\" (UniqueName: \"kubernetes.io/projected/2bbf5b95-eb34-48ce-970a-48eec581f83b-kube-api-access-89dlw\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.495135 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2bbf5b95-eb34-48ce-970a-48eec581f83b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.495148 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2bbf5b95-eb34-48ce-970a-48eec581f83b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.756474 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" event={"ID":"2bbf5b95-eb34-48ce-970a-48eec581f83b","Type":"ContainerDied","Data":"fa49be7150b1c0c3de249e0f82a0d01e7d454a343f93122c77700aaa7b38c1fb"} Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.756528 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld" Jan 28 19:30:04 crc kubenswrapper[4985]: I0128 19:30:04.756538 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fa49be7150b1c0c3de249e0f82a0d01e7d454a343f93122c77700aaa7b38c1fb" Jan 28 19:30:05 crc kubenswrapper[4985]: I0128 19:30:05.411564 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 19:30:05 crc kubenswrapper[4985]: I0128 19:30:05.422188 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493765-l92vx"] Jan 28 19:30:07 crc kubenswrapper[4985]: I0128 19:30:07.282162 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62198283-1005-48a7-91a7-44d4240224ef" path="/var/lib/kubelet/pods/62198283-1005-48a7-91a7-44d4240224ef/volumes" Jan 28 19:30:15 crc kubenswrapper[4985]: I0128 19:30:15.265380 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:30:15 crc kubenswrapper[4985]: I0128 19:30:15.886954 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675"} Jan 28 19:30:42 crc kubenswrapper[4985]: I0128 19:30:42.738378 4985 scope.go:117] "RemoveContainer" containerID="e7f4c4199443b277fce34519a5f0cc3daf60a217d86701b9fd4cb717d8480164" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.384837 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:22 crc kubenswrapper[4985]: E0128 19:31:22.386089 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerName="collect-profiles" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.386106 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerName="collect-profiles" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.386502 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" containerName="collect-profiles" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.388598 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.405644 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.528863 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.528932 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.528970 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.632431 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.632491 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.632515 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.633484 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.633589 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.659034 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"redhat-marketplace-4pgtm\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") " pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:22 crc kubenswrapper[4985]: I0128 19:31:22.767960 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm" Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.293442 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"] Jan 28 19:31:23 crc kubenswrapper[4985]: W0128 19:31:23.299623 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3cc63e1e_427e_4268_bd2a_0137da7b65a9.slice/crio-761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d WatchSource:0}: Error finding container 761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d: Status 404 returned error can't find the container with id 761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.725227 4985 generic.go:334] "Generic (PLEG): container finished" podID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12" exitCode=0 Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.725291 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12"} Jan 28 19:31:23 crc kubenswrapper[4985]: I0128 19:31:23.725318 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerStarted","Data":"761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d"} Jan 28 19:31:24 crc kubenswrapper[4985]: I0128 19:31:24.742410 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerStarted","Data":"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"} Jan 28 19:31:26 crc kubenswrapper[4985]: I0128 19:31:26.766822 4985 generic.go:334] "Generic (PLEG): container finished" podID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880" exitCode=0 Jan 28 19:31:26 crc kubenswrapper[4985]: I0128 19:31:26.766954 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"} Jan 28 19:31:27 crc kubenswrapper[4985]: I0128 19:31:27.782413 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerStarted","Data":"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"} Jan 28 19:31:27 crc kubenswrapper[4985]: I0128 19:31:27.800631 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4pgtm" podStartSLOduration=2.380400708 
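In the pod_startup_latency_tracker entry above, podStartE2EDuration (5.800610837s) minus podStartSLOduration (2.380400708) comes out to exactly the pull window, lastFinishedPulling minus firstStartedPulling (3.420210129s by the m=+ monotonic offsets), which suggests the SLO figure excludes image-pull time. A quick check using only numbers copied from that entry; the variable names are invented for this sketch:

# Numbers copied verbatim from the entry above.
first_started_pulling = 4694.555949878  # m=+ offset of firstStartedPulling
last_finished_pulling = 4697.976160007  # m=+ offset of lastFinishedPulling
pod_start_slo = 2.380400708             # podStartSLOduration
pod_start_e2e = 5.800610837             # podStartE2EDuration, in seconds

pull_time = last_finished_pulling - first_started_pulling
print(f"image pull window: {pull_time:.9f}s")                      # 3.420210129s
print(f"e2e minus SLO:     {pod_start_e2e - pod_start_slo:.9f}s")  # same value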
Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.768949 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4pgtm"
Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.769547 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4pgtm"
Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.834090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4pgtm"
Jan 28 19:31:32 crc kubenswrapper[4985]: I0128 19:31:32.923721 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4pgtm"
Jan 28 19:31:33 crc kubenswrapper[4985]: I0128 19:31:33.087950 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"]
Jan 28 19:31:34 crc kubenswrapper[4985]: I0128 19:31:34.869081 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4pgtm" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server" containerID="cri-o://b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9" gracePeriod=2
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.522599 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm"
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.589778 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") pod \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") "
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.590160 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") pod \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") "
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.590241 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") pod \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\" (UID: \"3cc63e1e-427e-4268-bd2a-0137da7b65a9\") "
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.591278 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities" (OuterVolumeSpecName: "utilities") pod "3cc63e1e-427e-4268-bd2a-0137da7b65a9" (UID: "3cc63e1e-427e-4268-bd2a-0137da7b65a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.599336 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s" (OuterVolumeSpecName: "kube-api-access-xf42s") pod "3cc63e1e-427e-4268-bd2a-0137da7b65a9" (UID: "3cc63e1e-427e-4268-bd2a-0137da7b65a9"). InnerVolumeSpecName "kube-api-access-xf42s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.628202 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3cc63e1e-427e-4268-bd2a-0137da7b65a9" (UID: "3cc63e1e-427e-4268-bd2a-0137da7b65a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.692889 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xf42s\" (UniqueName: \"kubernetes.io/projected/3cc63e1e-427e-4268-bd2a-0137da7b65a9-kube-api-access-xf42s\") on node \"crc\" DevicePath \"\""
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.692943 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.692953 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3cc63e1e-427e-4268-bd2a-0137da7b65a9-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886119 4985 generic.go:334] "Generic (PLEG): container finished" podID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9" exitCode=0
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886167 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"}
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886205 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4pgtm" event={"ID":"3cc63e1e-427e-4268-bd2a-0137da7b65a9","Type":"ContainerDied","Data":"761512f5c759e39a3d23c6ac2f7ed526651b7536e7f848c0e7f354de9dc8954d"}
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886375 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4pgtm"
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.886226 4985 scope.go:117] "RemoveContainer" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.922301 4985 scope.go:117] "RemoveContainer" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.945697 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"]
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.956887 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4pgtm"]
Jan 28 19:31:35 crc kubenswrapper[4985]: I0128 19:31:35.958052 4985 scope.go:117] "RemoveContainer" containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12"
Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.011417 4985 scope.go:117] "RemoveContainer" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"
Jan 28 19:31:36 crc kubenswrapper[4985]: E0128 19:31:36.012024 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9\": container with ID starting with b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9 not found: ID does not exist" containerID="b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"
Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012058 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9"} err="failed to get container status \"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9\": rpc error: code = NotFound desc = could not find container \"b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9\": container with ID starting with b5c496c565d56a145d4b5edb5df5d677b4b7f793f85af0216cc5a885733b97f9 not found: ID does not exist"
Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012080 4985 scope.go:117] "RemoveContainer" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"
Jan 28 19:31:36 crc kubenswrapper[4985]: E0128 19:31:36.012622 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880\": container with ID starting with a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880 not found: ID does not exist" containerID="a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"
Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012668 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880"} err="failed to get container status \"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880\": rpc error: code = NotFound desc = could not find container \"a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880\": container with ID starting with a2cace2f7ea67c812b1f1f029f36d00dfc2888ac63c6cd656c4a8d279e01c880 not found: ID does not exist"
Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.012697 4985 scope.go:117] "RemoveContainer" containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12"
Jan 28 19:31:36 crc kubenswrapper[4985]: E0128 19:31:36.013026 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12\": container with ID starting with 411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12 not found: ID does not exist" containerID="411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12"
Jan 28 19:31:36 crc kubenswrapper[4985]: I0128 19:31:36.013085 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12"} err="failed to get container status \"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12\": rpc error: code = NotFound desc = could not find container \"411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12\": container with ID starting with 411c223a616d7c11aa511c4c05ddbcf627cbd4a903904f7c7439fb8feef59d12 not found: ID does not exist"
Jan 28 19:31:37 crc kubenswrapper[4985]: I0128 19:31:37.282930 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" path="/var/lib/kubelet/pods/3cc63e1e-427e-4268-bd2a-0137da7b65a9/volumes"
Jan 28 19:32:41 crc kubenswrapper[4985]: I0128 19:32:41.186278 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:32:41 crc kubenswrapper[4985]: I0128 19:32:41.186951 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:33:11 crc kubenswrapper[4985]: I0128 19:33:11.186328 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:33:11 crc kubenswrapper[4985]: I0128 19:33:11.187080 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.185934 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.186540 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.186599 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.187302 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.187367 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675" gracePeriod=600 Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.570911 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675" exitCode=0 Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.571040 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675"} Jan 28 19:33:41 crc kubenswrapper[4985]: I0128 19:33:41.571469 4985 scope.go:117] "RemoveContainer" containerID="91584df7ca5b5d912bfd8da4ceff63f9d67ec2b84dc0db72d36c4916ac176680" Jan 28 19:33:42 crc kubenswrapper[4985]: I0128 19:33:42.589552 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"} Jan 28 19:35:41 crc kubenswrapper[4985]: I0128 19:35:41.185992 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:35:41 crc kubenswrapper[4985]: I0128 19:35:41.186551 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:36:11 crc kubenswrapper[4985]: I0128 19:36:11.186500 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:36:11 crc kubenswrapper[4985]: I0128 19:36:11.187178 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.186582 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.187189 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.187267 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.188347 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.188437 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" gracePeriod=600 Jan 28 19:36:41 crc kubenswrapper[4985]: E0128 19:36:41.308390 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.901514 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" exitCode=0 Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.901571 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"} Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.901611 4985 scope.go:117] "RemoveContainer" containerID="7f1a663e4711d5d267c37ae57a46f3735e8e9b6974b9957aacc5ac3d58d3e675" Jan 28 19:36:41 crc kubenswrapper[4985]: I0128 19:36:41.906144 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:36:41 crc kubenswrapper[4985]: E0128 19:36:41.907844 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:36:53 crc kubenswrapper[4985]: I0128 19:36:53.264201 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:36:53 crc kubenswrapper[4985]: E0128 19:36:53.264974 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:08 crc kubenswrapper[4985]: I0128 19:37:08.265293 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:08 crc kubenswrapper[4985]: E0128 19:37:08.266528 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:22 crc kubenswrapper[4985]: I0128 19:37:22.298700 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:22 crc kubenswrapper[4985]: E0128 19:37:22.300944 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:33 crc kubenswrapper[4985]: I0128 19:37:33.264155 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:33 crc kubenswrapper[4985]: E0128 19:37:33.264923 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:37:47 crc kubenswrapper[4985]: I0128 19:37:47.265653 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:37:47 crc kubenswrapper[4985]: E0128 19:37:47.267488 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Jan 28 19:38:00 crc kubenswrapper[4985]: I0128 19:38:00.239562 4985 trace.go:236] Trace[1046175934]: "Calculate volume metrics of ovndbcluster-sb-etc-ovn for pod openstack/ovsdbserver-sb-0" (28-Jan-2026 19:37:59.179) (total time: 1059ms):
Jan 28 19:38:00 crc kubenswrapper[4985]: Trace[1046175934]: [1.059137805s] [1.059137805s] END
Jan 28 19:38:02 crc kubenswrapper[4985]: I0128 19:38:02.264959 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"
Jan 28 19:38:02 crc kubenswrapper[4985]: E0128 19:38:02.266006 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.606534 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n5dzq"]
Jan 28 19:38:16 crc kubenswrapper[4985]: E0128 19:38:16.607732 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-utilities"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.607751 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-utilities"
Jan 28 19:38:16 crc kubenswrapper[4985]: E0128 19:38:16.607774 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.607782 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server"
Jan 28 19:38:16 crc kubenswrapper[4985]: E0128 19:38:16.607832 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-content"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.607840 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="extract-content"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.608120 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3cc63e1e-427e-4268-bd2a-0137da7b65a9" containerName="registry-server"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.610303 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.645832 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.646121 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.646244 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.647459 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"]
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.749663 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.749803 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.749934 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.750674 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.750716 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.776685 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"community-operators-n5dzq\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") " pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:16 crc kubenswrapper[4985]: I0128 19:38:16.948692 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:17 crc kubenswrapper[4985]: I0128 19:38:17.264330 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"
Jan 28 19:38:17 crc kubenswrapper[4985]: E0128 19:38:17.265864 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:38:17 crc kubenswrapper[4985]: W0128 19:38:17.519990 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8194ba08_4eee_42cf_90e5_997fed0b6208.slice/crio-0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da WatchSource:0}: Error finding container 0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da: Status 404 returned error can't find the container with id 0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da
Jan 28 19:38:17 crc kubenswrapper[4985]: I0128 19:38:17.520043 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"]
Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.174237 4985 generic.go:334] "Generic (PLEG): container finished" podID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d" exitCode=0
Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.174635 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d"}
Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.174922 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerStarted","Data":"0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da"}
Jan 28 19:38:18 crc kubenswrapper[4985]: I0128 19:38:18.180942 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 19:38:19 crc kubenswrapper[4985]: I0128 19:38:19.190746 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerStarted","Data":"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"}
Jan 28 19:38:21 crc kubenswrapper[4985]: I0128 19:38:21.218393 4985 generic.go:334] "Generic (PLEG): container finished" podID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc" exitCode=0
Jan 28 19:38:21 crc kubenswrapper[4985]: I0128 19:38:21.218459 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"}
Jan 28 19:38:22 crc kubenswrapper[4985]: I0128 19:38:22.236646 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerStarted","Data":"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"}
Jan 28 19:38:22 crc kubenswrapper[4985]: I0128 19:38:22.280845 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n5dzq" podStartSLOduration=2.7557350510000003 podStartE2EDuration="6.280817529s" podCreationTimestamp="2026-01-28 19:38:16 +0000 UTC" firstStartedPulling="2026-01-28 19:38:18.18065021 +0000 UTC m=+5109.007213041" lastFinishedPulling="2026-01-28 19:38:21.705732698 +0000 UTC m=+5112.532295519" observedRunningTime="2026-01-28 19:38:22.262806919 +0000 UTC m=+5113.089369740" watchObservedRunningTime="2026-01-28 19:38:22.280817529 +0000 UTC m=+5113.107380390"
Jan 28 19:38:26 crc kubenswrapper[4985]: I0128 19:38:26.950310 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:26 crc kubenswrapper[4985]: I0128 19:38:26.950861 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:27 crc kubenswrapper[4985]: I0128 19:38:27.031230 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:27 crc kubenswrapper[4985]: I0128 19:38:27.356161 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:27 crc kubenswrapper[4985]: I0128 19:38:27.415459 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"]
Jan 28 19:38:29 crc kubenswrapper[4985]: I0128 19:38:29.316354 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n5dzq" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" containerID="cri-o://ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229" gracePeriod=2
Jan 28 19:38:29 crc kubenswrapper[4985]: I0128 19:38:29.913998 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.023066 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") pod \"8194ba08-4eee-42cf-90e5-997fed0b6208\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") "
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.023141 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") pod \"8194ba08-4eee-42cf-90e5-997fed0b6208\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") "
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.023338 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") pod \"8194ba08-4eee-42cf-90e5-997fed0b6208\" (UID: \"8194ba08-4eee-42cf-90e5-997fed0b6208\") "
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.025442 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities" (OuterVolumeSpecName: "utilities") pod "8194ba08-4eee-42cf-90e5-997fed0b6208" (UID: "8194ba08-4eee-42cf-90e5-997fed0b6208"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.031146 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr" (OuterVolumeSpecName: "kube-api-access-mh6vr") pod "8194ba08-4eee-42cf-90e5-997fed0b6208" (UID: "8194ba08-4eee-42cf-90e5-997fed0b6208"). InnerVolumeSpecName "kube-api-access-mh6vr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.097781 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8194ba08-4eee-42cf-90e5-997fed0b6208" (UID: "8194ba08-4eee-42cf-90e5-997fed0b6208"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.130550 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.130599 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8194ba08-4eee-42cf-90e5-997fed0b6208-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.130615 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mh6vr\" (UniqueName: \"kubernetes.io/projected/8194ba08-4eee-42cf-90e5-997fed0b6208-kube-api-access-mh6vr\") on node \"crc\" DevicePath \"\""
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331398 4985 generic.go:334] "Generic (PLEG): container finished" podID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229" exitCode=0
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331462 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"}
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331502 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n5dzq" event={"ID":"8194ba08-4eee-42cf-90e5-997fed0b6208","Type":"ContainerDied","Data":"0244b45b0b3d3dd02980d7de91a9ca68aa56b8f93ce2d6b2fdfaf9c5c6ef80da"}
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331531 4985 scope.go:117] "RemoveContainer" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.331734 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n5dzq"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.356660 4985 scope.go:117] "RemoveContainer" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.392833 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"]
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.400811 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n5dzq"]
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.404720 4985 scope.go:117] "RemoveContainer" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.460857 4985 scope.go:117] "RemoveContainer" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"
Jan 28 19:38:30 crc kubenswrapper[4985]: E0128 19:38:30.461399 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229\": container with ID starting with ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229 not found: ID does not exist" containerID="ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.461469 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229"} err="failed to get container status \"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229\": rpc error: code = NotFound desc = could not find container \"ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229\": container with ID starting with ba847d8ad45a29cf92594c272474d7276707e2f93d56f4e3e5c4df12e5a39229 not found: ID does not exist"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.461509 4985 scope.go:117] "RemoveContainer" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"
Jan 28 19:38:30 crc kubenswrapper[4985]: E0128 19:38:30.461947 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc\": container with ID starting with 5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc not found: ID does not exist" containerID="5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.461999 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc"} err="failed to get container status \"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc\": rpc error: code = NotFound desc = could not find container \"5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc\": container with ID starting with 5186be1c63b63c6ad72210c3c7f1ee139c29b671050a8576144d13b67afd1cdc not found: ID does not exist"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.462038 4985 scope.go:117] "RemoveContainer" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d"
Jan 28 19:38:30 crc kubenswrapper[4985]: E0128 19:38:30.462421 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d\": container with ID starting with 6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d not found: ID does not exist" containerID="6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d"
Jan 28 19:38:30 crc kubenswrapper[4985]: I0128 19:38:30.462456 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d"} err="failed to get container status \"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d\": rpc error: code = NotFound desc = could not find container \"6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d\": container with ID starting with 6f837c5a6b3ee96035ba941aeb8b9eac49d2479289e1f62105e2e2cb992c999d not found: ID does not exist"
Jan 28 19:38:31 crc kubenswrapper[4985]: I0128 19:38:31.276019 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"
Jan 28 19:38:31 crc kubenswrapper[4985]: E0128 19:38:31.276735 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:38:31 crc kubenswrapper[4985]: I0128 19:38:31.280846 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" path="/var/lib/kubelet/pods/8194ba08-4eee-42cf-90e5-997fed0b6208/volumes"
Jan 28 19:38:42 crc kubenswrapper[4985]: I0128 19:38:42.265025 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"
Jan 28 19:38:42 crc kubenswrapper[4985]: E0128 19:38:42.265908 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:38:56 crc kubenswrapper[4985]: I0128 19:38:56.264166 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"
Jan 28 19:38:56 crc kubenswrapper[4985]: E0128 19:38:56.265120 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:39:08 crc kubenswrapper[4985]: I0128 19:39:08.264289 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"
Jan 28 19:39:08 crc kubenswrapper[4985]: E0128 19:39:08.265660 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:21 crc kubenswrapper[4985]: I0128 19:39:21.310275 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:21 crc kubenswrapper[4985]: E0128 19:39:21.314048 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:33 crc kubenswrapper[4985]: I0128 19:39:33.874621 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:33 crc kubenswrapper[4985]: E0128 19:39:33.876983 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:45 crc kubenswrapper[4985]: I0128 19:39:45.264659 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:45 crc kubenswrapper[4985]: E0128 19:39:45.267239 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.414451 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:39:52 crc kubenswrapper[4985]: E0128 19:39:52.416737 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-content" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.416859 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-content" Jan 28 19:39:52 crc kubenswrapper[4985]: E0128 19:39:52.416953 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.417031 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" Jan 28 19:39:52 crc kubenswrapper[4985]: E0128 19:39:52.417134 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-utilities" Jan 28 19:39:52 crc 
kubenswrapper[4985]: I0128 19:39:52.417211 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="extract-utilities" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.417631 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="8194ba08-4eee-42cf-90e5-997fed0b6208" containerName="registry-server" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.419959 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.446375 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.455860 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.456515 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.456716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.559089 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.559164 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.559220 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.560004 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: 
I0128 19:39:52.560039 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.592071 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"redhat-operators-x7mbz\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:52 crc kubenswrapper[4985]: I0128 19:39:52.759120 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:39:53 crc kubenswrapper[4985]: I0128 19:39:53.308880 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:39:54 crc kubenswrapper[4985]: I0128 19:39:54.204685 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7"} Jan 28 19:39:54 crc kubenswrapper[4985]: I0128 19:39:54.205018 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"c060460c2a544abd567f16e8afbf161a027f4391d1a578f020dab8a0d2e7a75e"} Jan 28 19:39:55 crc kubenswrapper[4985]: I0128 19:39:55.221464 4985 generic.go:334] "Generic (PLEG): container finished" podID="c8200781-f798-46b5-bebe-e2703093cc9a" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" exitCode=0 Jan 28 19:39:55 crc kubenswrapper[4985]: I0128 19:39:55.221558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7"} Jan 28 19:39:55 crc kubenswrapper[4985]: I0128 19:39:55.222643 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e"} Jan 28 19:39:57 crc kubenswrapper[4985]: I0128 19:39:57.264956 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:39:57 crc kubenswrapper[4985]: E0128 19:39:57.266017 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:05 crc kubenswrapper[4985]: I0128 19:40:05.349421 4985 generic.go:334] "Generic (PLEG): container finished" podID="c8200781-f798-46b5-bebe-e2703093cc9a" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" 
exitCode=0 Jan 28 19:40:05 crc kubenswrapper[4985]: I0128 19:40:05.349855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e"} Jan 28 19:40:06 crc kubenswrapper[4985]: I0128 19:40:06.364384 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerStarted","Data":"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344"} Jan 28 19:40:06 crc kubenswrapper[4985]: I0128 19:40:06.398201 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-x7mbz" podStartSLOduration=2.8362278610000002 podStartE2EDuration="14.39818036s" podCreationTimestamp="2026-01-28 19:39:52 +0000 UTC" firstStartedPulling="2026-01-28 19:39:54.206869034 +0000 UTC m=+5205.033431855" lastFinishedPulling="2026-01-28 19:40:05.768821533 +0000 UTC m=+5216.595384354" observedRunningTime="2026-01-28 19:40:06.386292924 +0000 UTC m=+5217.212855745" watchObservedRunningTime="2026-01-28 19:40:06.39818036 +0000 UTC m=+5217.224743171" Jan 28 19:40:08 crc kubenswrapper[4985]: I0128 19:40:08.266801 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:08 crc kubenswrapper[4985]: E0128 19:40:08.267468 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:12 crc kubenswrapper[4985]: I0128 19:40:12.759878 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:12 crc kubenswrapper[4985]: I0128 19:40:12.760186 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:13 crc kubenswrapper[4985]: I0128 19:40:13.821050 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-x7mbz" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" probeResult="failure" output=< Jan 28 19:40:13 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:40:13 crc kubenswrapper[4985]: > Jan 28 19:40:21 crc kubenswrapper[4985]: I0128 19:40:21.271470 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:21 crc kubenswrapper[4985]: E0128 19:40:21.272199 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:23 crc kubenswrapper[4985]: I0128 19:40:23.805641 4985 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-marketplace/redhat-operators-x7mbz" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" probeResult="failure" output=< Jan 28 19:40:23 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:40:23 crc kubenswrapper[4985]: > Jan 28 19:40:32 crc kubenswrapper[4985]: I0128 19:40:32.264471 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:32 crc kubenswrapper[4985]: E0128 19:40:32.265179 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:32 crc kubenswrapper[4985]: I0128 19:40:32.810766 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:32 crc kubenswrapper[4985]: I0128 19:40:32.871646 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:33 crc kubenswrapper[4985]: I0128 19:40:33.053411 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:40:34 crc kubenswrapper[4985]: I0128 19:40:34.701885 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-x7mbz" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" containerID="cri-o://dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" gracePeriod=2 Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.286542 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.403790 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") pod \"c8200781-f798-46b5-bebe-e2703093cc9a\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.404030 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") pod \"c8200781-f798-46b5-bebe-e2703093cc9a\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.404078 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") pod \"c8200781-f798-46b5-bebe-e2703093cc9a\" (UID: \"c8200781-f798-46b5-bebe-e2703093cc9a\") " Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.405948 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities" (OuterVolumeSpecName: "utilities") pod "c8200781-f798-46b5-bebe-e2703093cc9a" (UID: "c8200781-f798-46b5-bebe-e2703093cc9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.407230 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.411490 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9" (OuterVolumeSpecName: "kube-api-access-xgrr9") pod "c8200781-f798-46b5-bebe-e2703093cc9a" (UID: "c8200781-f798-46b5-bebe-e2703093cc9a"). InnerVolumeSpecName "kube-api-access-xgrr9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.509069 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgrr9\" (UniqueName: \"kubernetes.io/projected/c8200781-f798-46b5-bebe-e2703093cc9a-kube-api-access-xgrr9\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.553764 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c8200781-f798-46b5-bebe-e2703093cc9a" (UID: "c8200781-f798-46b5-bebe-e2703093cc9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.611986 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c8200781-f798-46b5-bebe-e2703093cc9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712766 4985 generic.go:334] "Generic (PLEG): container finished" podID="c8200781-f798-46b5-bebe-e2703093cc9a" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" exitCode=0 Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712813 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344"} Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712831 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-x7mbz" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712863 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-x7mbz" event={"ID":"c8200781-f798-46b5-bebe-e2703093cc9a","Type":"ContainerDied","Data":"c060460c2a544abd567f16e8afbf161a027f4391d1a578f020dab8a0d2e7a75e"} Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.712884 4985 scope.go:117] "RemoveContainer" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.746083 4985 scope.go:117] "RemoveContainer" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.769017 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.782986 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-x7mbz"] Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.784475 4985 scope.go:117] "RemoveContainer" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.832623 4985 scope.go:117] "RemoveContainer" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" Jan 28 19:40:35 crc kubenswrapper[4985]: E0128 19:40:35.833046 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344\": container with ID starting with dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344 not found: ID does not exist" containerID="dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833082 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344"} err="failed to get container status \"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344\": rpc error: code = NotFound desc = could not find container \"dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344\": container with ID starting with dec990c0b29ead0f895a649be3708418cb046b79e118ecc4b8a4a4bfcceb7344 not found: ID does not exist" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833106 4985 scope.go:117] "RemoveContainer" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" Jan 28 19:40:35 crc kubenswrapper[4985]: E0128 19:40:35.833701 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e\": container with ID starting with 9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e not found: ID does not exist" containerID="9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833790 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e"} err="failed to get container status \"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e\": rpc error: code = NotFound desc = could not find container 
\"9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e\": container with ID starting with 9279ef438d3bf7a6510b2638a3d7e6bab08d50f7cf09b055b91662f48af96d7e not found: ID does not exist" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.833818 4985 scope.go:117] "RemoveContainer" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" Jan 28 19:40:35 crc kubenswrapper[4985]: E0128 19:40:35.834192 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7\": container with ID starting with f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7 not found: ID does not exist" containerID="f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7" Jan 28 19:40:35 crc kubenswrapper[4985]: I0128 19:40:35.834207 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7"} err="failed to get container status \"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7\": rpc error: code = NotFound desc = could not find container \"f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7\": container with ID starting with f196bca722bb0c0344803ad6fc009c3905426a668434eb0868a521a8267da1e7 not found: ID does not exist" Jan 28 19:40:37 crc kubenswrapper[4985]: I0128 19:40:37.279913 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" path="/var/lib/kubelet/pods/c8200781-f798-46b5-bebe-e2703093cc9a/volumes" Jan 28 19:40:43 crc kubenswrapper[4985]: I0128 19:40:43.266531 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:43 crc kubenswrapper[4985]: E0128 19:40:43.267964 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:40:57 crc kubenswrapper[4985]: I0128 19:40:57.264626 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:40:57 crc kubenswrapper[4985]: E0128 19:40:57.265369 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:11 crc kubenswrapper[4985]: I0128 19:41:11.277764 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:11 crc kubenswrapper[4985]: E0128 19:41:11.278800 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:22 crc kubenswrapper[4985]: I0128 19:41:22.264445 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:22 crc kubenswrapper[4985]: E0128 19:41:22.265295 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.958938 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:26 crc kubenswrapper[4985]: E0128 19:41:26.960392 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-content" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.960415 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-content" Jan 28 19:41:26 crc kubenswrapper[4985]: E0128 19:41:26.960494 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.960510 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" Jan 28 19:41:26 crc kubenswrapper[4985]: E0128 19:41:26.960532 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-utilities" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.960543 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="extract-utilities" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.961011 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8200781-f798-46b5-bebe-e2703093cc9a" containerName="registry-server" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.963579 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:26 crc kubenswrapper[4985]: I0128 19:41:26.973247 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.095787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.095948 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.096110 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.199432 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.199519 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.199613 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.200260 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.200372 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.220348 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"redhat-marketplace-9c226\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.305228 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:27 crc kubenswrapper[4985]: I0128 19:41:27.799291 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:28 crc kubenswrapper[4985]: I0128 19:41:28.373424 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c879773-1159-4057-9025-6b6903d4dddc" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" exitCode=0 Jan 28 19:41:28 crc kubenswrapper[4985]: I0128 19:41:28.374026 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12"} Jan 28 19:41:28 crc kubenswrapper[4985]: I0128 19:41:28.374068 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerStarted","Data":"e5889886528cc2c62aea92e10443213f833c70743da932f4366fbe7ae812ac86"} Jan 28 19:41:30 crc kubenswrapper[4985]: I0128 19:41:30.420763 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c879773-1159-4057-9025-6b6903d4dddc" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" exitCode=0 Jan 28 19:41:30 crc kubenswrapper[4985]: I0128 19:41:30.420837 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1"} Jan 28 19:41:31 crc kubenswrapper[4985]: I0128 19:41:31.438773 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerStarted","Data":"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035"} Jan 28 19:41:31 crc kubenswrapper[4985]: I0128 19:41:31.466370 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9c226" podStartSLOduration=3.011869414 podStartE2EDuration="5.466341302s" podCreationTimestamp="2026-01-28 19:41:26 +0000 UTC" firstStartedPulling="2026-01-28 19:41:28.376162296 +0000 UTC m=+5299.202725117" lastFinishedPulling="2026-01-28 19:41:30.830634164 +0000 UTC m=+5301.657197005" observedRunningTime="2026-01-28 19:41:31.457344097 +0000 UTC m=+5302.283906988" watchObservedRunningTime="2026-01-28 19:41:31.466341302 +0000 UTC m=+5302.292904153" Jan 28 19:41:36 crc kubenswrapper[4985]: I0128 19:41:36.265341 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:36 crc kubenswrapper[4985]: E0128 19:41:36.266357 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.305421 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.305745 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.385089 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.550418 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:37 crc kubenswrapper[4985]: I0128 19:41:37.634833 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:39 crc kubenswrapper[4985]: I0128 19:41:39.523769 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9c226" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" containerID="cri-o://65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" gracePeriod=2 Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.093988 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.227612 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") pod \"0c879773-1159-4057-9025-6b6903d4dddc\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.227696 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") pod \"0c879773-1159-4057-9025-6b6903d4dddc\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.227742 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") pod \"0c879773-1159-4057-9025-6b6903d4dddc\" (UID: \"0c879773-1159-4057-9025-6b6903d4dddc\") " Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.228744 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities" (OuterVolumeSpecName: "utilities") pod "0c879773-1159-4057-9025-6b6903d4dddc" (UID: "0c879773-1159-4057-9025-6b6903d4dddc"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.235631 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn" (OuterVolumeSpecName: "kube-api-access-tg2jn") pod "0c879773-1159-4057-9025-6b6903d4dddc" (UID: "0c879773-1159-4057-9025-6b6903d4dddc"). InnerVolumeSpecName "kube-api-access-tg2jn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.329627 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tg2jn\" (UniqueName: \"kubernetes.io/projected/0c879773-1159-4057-9025-6b6903d4dddc-kube-api-access-tg2jn\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.329971 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.340994 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0c879773-1159-4057-9025-6b6903d4dddc" (UID: "0c879773-1159-4057-9025-6b6903d4dddc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.431661 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0c879773-1159-4057-9025-6b6903d4dddc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535528 4985 generic.go:334] "Generic (PLEG): container finished" podID="0c879773-1159-4057-9025-6b6903d4dddc" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" exitCode=0 Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535574 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035"} Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535603 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9c226" event={"ID":"0c879773-1159-4057-9025-6b6903d4dddc","Type":"ContainerDied","Data":"e5889886528cc2c62aea92e10443213f833c70743da932f4366fbe7ae812ac86"} Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535624 4985 scope.go:117] "RemoveContainer" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.535775 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9c226" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.570433 4985 scope.go:117] "RemoveContainer" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.600573 4985 scope.go:117] "RemoveContainer" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.600763 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.610791 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9c226"] Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.659824 4985 scope.go:117] "RemoveContainer" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" Jan 28 19:41:40 crc kubenswrapper[4985]: E0128 19:41:40.660625 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035\": container with ID starting with 65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035 not found: ID does not exist" containerID="65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.660675 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035"} err="failed to get container status \"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035\": rpc error: code = NotFound desc = could not find container \"65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035\": container with ID starting with 65bb3277c2c648ea4131c7e5e7bba835450a7f9fdc37180ebb27805c215ed035 not found: ID does not exist" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.660699 4985 scope.go:117] "RemoveContainer" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" Jan 28 19:41:40 crc kubenswrapper[4985]: E0128 19:41:40.661285 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1\": container with ID starting with 568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1 not found: ID does not exist" containerID="568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.661339 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1"} err="failed to get container status \"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1\": rpc error: code = NotFound desc = could not find container \"568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1\": container with ID starting with 568435cff6a109af5f69d8b5fca929822f8b049342b52e0226c23f064a346bb1 not found: ID does not exist" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.661371 4985 scope.go:117] "RemoveContainer" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" Jan 28 19:41:40 crc kubenswrapper[4985]: E0128 19:41:40.661640 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12\": container with ID starting with f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12 not found: ID does not exist" containerID="f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12" Jan 28 19:41:40 crc kubenswrapper[4985]: I0128 19:41:40.661711 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12"} err="failed to get container status \"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12\": rpc error: code = NotFound desc = could not find container \"f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12\": container with ID starting with f9b5166af8cd232a3bd2b38ecab9b6fff8091b5c68313fe23f507187117cea12 not found: ID does not exist" Jan 28 19:41:41 crc kubenswrapper[4985]: I0128 19:41:41.288749 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c879773-1159-4057-9025-6b6903d4dddc" path="/var/lib/kubelet/pods/0c879773-1159-4057-9025-6b6903d4dddc/volumes" Jan 28 19:41:48 crc kubenswrapper[4985]: I0128 19:41:48.265730 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e" Jan 28 19:41:48 crc kubenswrapper[4985]: I0128 19:41:48.663541 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"} Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.703661 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"] Jan 28 19:43:17 crc kubenswrapper[4985]: E0128 19:43:17.704818 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-utilities" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.704840 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-utilities" Jan 28 19:43:17 crc kubenswrapper[4985]: E0128 19:43:17.704858 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.704866 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" Jan 28 19:43:17 crc kubenswrapper[4985]: E0128 19:43:17.704887 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-content" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.704900 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="extract-content" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.705542 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c879773-1159-4057-9025-6b6903d4dddc" containerName="registry-server" Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.709430 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.720953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.721311 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.721935 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"]
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.722854 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.825517 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.825623 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.825762 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.826640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.826733 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:17 crc kubenswrapper[4985]: I0128 19:43:17.887535 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"certified-operators-8kdfx\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") " pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.038460 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.576697 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"]
Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.917662 4985 generic.go:334] "Generic (PLEG): container finished" podID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerID="53cf309ed2f40b50e2f28902eb3196f215436b1a3ae84c2cb6de2fc4f4e68e70" exitCode=0
Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.917717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"53cf309ed2f40b50e2f28902eb3196f215436b1a3ae84c2cb6de2fc4f4e68e70"}
Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.917941 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerStarted","Data":"4b1bcbca2155115d965173a4aa8738794325cf386b7456e68f57d25f66a42f5b"}
Jan 28 19:43:18 crc kubenswrapper[4985]: I0128 19:43:18.919960 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 19:43:19 crc kubenswrapper[4985]: I0128 19:43:19.930735 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerStarted","Data":"5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5"}
Jan 28 19:43:22 crc kubenswrapper[4985]: I0128 19:43:22.971622 4985 generic.go:334] "Generic (PLEG): container finished" podID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerID="5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5" exitCode=0
Jan 28 19:43:22 crc kubenswrapper[4985]: I0128 19:43:22.971713 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5"}
Jan 28 19:43:26 crc kubenswrapper[4985]: I0128 19:43:26.003333 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerStarted","Data":"516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb"}
Jan 28 19:43:26 crc kubenswrapper[4985]: I0128 19:43:26.031620 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8kdfx" podStartSLOduration=2.494810002 podStartE2EDuration="9.031598374s" podCreationTimestamp="2026-01-28 19:43:17 +0000 UTC" firstStartedPulling="2026-01-28 19:43:18.919586097 +0000 UTC m=+5409.746148928" lastFinishedPulling="2026-01-28 19:43:25.456374439 +0000 UTC m=+5416.282937300" observedRunningTime="2026-01-28 19:43:26.021709974 +0000 UTC m=+5416.848272795" watchObservedRunningTime="2026-01-28 19:43:26.031598374 +0000 UTC m=+5416.858161205"
Jan 28 19:43:28 crc kubenswrapper[4985]: I0128 19:43:28.055763 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:28 crc kubenswrapper[4985]: I0128 19:43:28.058601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:28 crc kubenswrapper[4985]: I0128 19:43:28.126533 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:30 crc kubenswrapper[4985]: I0128 19:43:30.166971 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:30 crc kubenswrapper[4985]: I0128 19:43:30.227097 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"]
Jan 28 19:43:32 crc kubenswrapper[4985]: I0128 19:43:32.116124 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8kdfx" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server" containerID="cri-o://516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb" gracePeriod=2
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.134926 4985 generic.go:334] "Generic (PLEG): container finished" podID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerID="516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb" exitCode=0
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.135119 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb"}
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.337435 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.357496 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") pod \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") "
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.357648 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") pod \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") "
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.357746 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") pod \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\" (UID: \"1bd75f3d-baf4-4a14-bf0a-182f76c18de8\") "
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.359808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities" (OuterVolumeSpecName: "utilities") pod "1bd75f3d-baf4-4a14-bf0a-182f76c18de8" (UID: "1bd75f3d-baf4-4a14-bf0a-182f76c18de8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.382472 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j" (OuterVolumeSpecName: "kube-api-access-28p2j") pod "1bd75f3d-baf4-4a14-bf0a-182f76c18de8" (UID: "1bd75f3d-baf4-4a14-bf0a-182f76c18de8"). InnerVolumeSpecName "kube-api-access-28p2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.437339 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1bd75f3d-baf4-4a14-bf0a-182f76c18de8" (UID: "1bd75f3d-baf4-4a14-bf0a-182f76c18de8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.461664 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.461698 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-28p2j\" (UniqueName: \"kubernetes.io/projected/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-kube-api-access-28p2j\") on node \"crc\" DevicePath \"\""
Jan 28 19:43:33 crc kubenswrapper[4985]: I0128 19:43:33.461708 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1bd75f3d-baf4-4a14-bf0a-182f76c18de8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.151743 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8kdfx" event={"ID":"1bd75f3d-baf4-4a14-bf0a-182f76c18de8","Type":"ContainerDied","Data":"4b1bcbca2155115d965173a4aa8738794325cf386b7456e68f57d25f66a42f5b"}
Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.152148 4985 scope.go:117] "RemoveContainer" containerID="516a0a94489f3eda26f25eb3e6179077c3ae29f6ff3f349e8e7123d4ec5356bb"
Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.151878 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8kdfx"
Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.190029 4985 scope.go:117] "RemoveContainer" containerID="5342aa70327afa7d8c40c750487da67a40528b4412aa1733a682408974326cc5"
Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.234010 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"]
Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.243862 4985 scope.go:117] "RemoveContainer" containerID="53cf309ed2f40b50e2f28902eb3196f215436b1a3ae84c2cb6de2fc4f4e68e70"
Jan 28 19:43:34 crc kubenswrapper[4985]: I0128 19:43:34.252145 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8kdfx"]
Jan 28 19:43:35 crc kubenswrapper[4985]: I0128 19:43:35.290556 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" path="/var/lib/kubelet/pods/1bd75f3d-baf4-4a14-bf0a-182f76c18de8/volumes"
Jan 28 19:44:11 crc kubenswrapper[4985]: I0128 19:44:11.186165 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:44:11 crc kubenswrapper[4985]: I0128 19:44:11.186849 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:44:41 crc kubenswrapper[4985]: I0128 19:44:41.186572 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:44:41 crc kubenswrapper[4985]: I0128 19:44:41.187356 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.151961 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"]
Jan 28 19:45:00 crc kubenswrapper[4985]: E0128 19:45:00.152958 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.152971 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server"
Jan 28 19:45:00 crc kubenswrapper[4985]: E0128 19:45:00.152989 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-utilities"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.152997 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-utilities"
Jan 28 19:45:00 crc kubenswrapper[4985]: E0128 19:45:00.153039 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-content"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.153045 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="extract-content"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.153279 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="1bd75f3d-baf4-4a14-bf0a-182f76c18de8" containerName="registry-server"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.154011 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.156678 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.156893 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.166947 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"]
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.195757 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.195850 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.195892 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.298908 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.299017 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.299278 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.300495 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.311110 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.333629 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"collect-profiles-29493825-k5pt9\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:00 crc kubenswrapper[4985]: I0128 19:45:00.475484 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:01 crc kubenswrapper[4985]: I0128 19:45:01.011949 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"]
Jan 28 19:45:01 crc kubenswrapper[4985]: W0128 19:45:01.012855 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73b1d5c3_055f_41c9_aae7_f397142ddf05.slice/crio-b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c WatchSource:0}: Error finding container b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c: Status 404 returned error can't find the container with id b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c
Jan 28 19:45:01 crc kubenswrapper[4985]: I0128 19:45:01.223888 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerStarted","Data":"db2846bf6da7236873840864c39f40024962e4f67507dcec60b63c320c36883d"}
Jan 28 19:45:01 crc kubenswrapper[4985]: I0128 19:45:01.224190 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerStarted","Data":"b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c"}
Jan 28 19:45:02 crc kubenswrapper[4985]: I0128 19:45:02.256170 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" podStartSLOduration=2.256147093 podStartE2EDuration="2.256147093s" podCreationTimestamp="2026-01-28 19:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 19:45:02.251132941 +0000 UTC m=+5513.077695762" watchObservedRunningTime="2026-01-28 19:45:02.256147093 +0000 UTC m=+5513.082709914"
Jan 28 19:45:03 crc kubenswrapper[4985]: I0128 19:45:03.285303 4985 generic.go:334] "Generic (PLEG): container finished" podID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerID="db2846bf6da7236873840864c39f40024962e4f67507dcec60b63c320c36883d" exitCode=0
Jan 28 19:45:03 crc kubenswrapper[4985]: I0128 19:45:03.289019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerDied","Data":"db2846bf6da7236873840864c39f40024962e4f67507dcec60b63c320c36883d"}
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.753197 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.843723 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") pod \"73b1d5c3-055f-41c9-aae7-f397142ddf05\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") "
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.843954 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") pod \"73b1d5c3-055f-41c9-aae7-f397142ddf05\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") "
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.844104 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") pod \"73b1d5c3-055f-41c9-aae7-f397142ddf05\" (UID: \"73b1d5c3-055f-41c9-aae7-f397142ddf05\") "
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.844962 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume" (OuterVolumeSpecName: "config-volume") pod "73b1d5c3-055f-41c9-aae7-f397142ddf05" (UID: "73b1d5c3-055f-41c9-aae7-f397142ddf05"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.845613 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/73b1d5c3-055f-41c9-aae7-f397142ddf05-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.850896 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc" (OuterVolumeSpecName: "kube-api-access-b8bxc") pod "73b1d5c3-055f-41c9-aae7-f397142ddf05" (UID: "73b1d5c3-055f-41c9-aae7-f397142ddf05"). InnerVolumeSpecName "kube-api-access-b8bxc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.864983 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "73b1d5c3-055f-41c9-aae7-f397142ddf05" (UID: "73b1d5c3-055f-41c9-aae7-f397142ddf05"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.947669 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b8bxc\" (UniqueName: \"kubernetes.io/projected/73b1d5c3-055f-41c9-aae7-f397142ddf05-kube-api-access-b8bxc\") on node \"crc\" DevicePath \"\""
Jan 28 19:45:04 crc kubenswrapper[4985]: I0128 19:45:04.947970 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/73b1d5c3-055f-41c9-aae7-f397142ddf05-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.312769 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9" event={"ID":"73b1d5c3-055f-41c9-aae7-f397142ddf05","Type":"ContainerDied","Data":"b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c"}
Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.313066 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5ec9e22a34d8fb7f9784556db1f19a5b3e065c5522fbe347628ef3fdba9655c"
Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.312826 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493825-k5pt9"
Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.880116 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"]
Jan 28 19:45:05 crc kubenswrapper[4985]: I0128 19:45:05.894309 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493780-v4zzw"]
Jan 28 19:45:07 crc kubenswrapper[4985]: I0128 19:45:07.279308 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1" path="/var/lib/kubelet/pods/322cdd3d-ab37-4ddd-bece-51e6b4d0b3b1/volumes"
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.186879 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.189928 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.190232 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.192078 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.192459 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d" gracePeriod=600
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.381757 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d" exitCode=0
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.381814 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"}
Jan 28 19:45:11 crc kubenswrapper[4985]: I0128 19:45:11.382151 4985 scope.go:117] "RemoveContainer" containerID="d61d9b9540c19ee637ed548c89de998f3fd24e3ce02e7359584b30ca2eedf15e"
Jan 28 19:45:12 crc kubenswrapper[4985]: I0128 19:45:12.396708 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"}
Jan 28 19:45:16 crc kubenswrapper[4985]: E0128 19:45:16.734825 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:56152->38.102.83.195:43365: write tcp 38.102.83.195:56152->38.102.83.195:43365: write: broken pipe
Jan 28 19:45:43 crc kubenswrapper[4985]: I0128 19:45:43.275089 4985 scope.go:117] "RemoveContainer" containerID="fc36e8e83ce2dcdbad3b7ac3097968106477e97a9a58431ad0304a2bcaebdce7"
Jan 28 19:46:57 crc kubenswrapper[4985]: E0128 19:46:57.488853 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:45024->38.102.83.195:43365: write tcp 38.102.83.195:45024->38.102.83.195:43365: write: broken pipe
Jan 28 19:47:11 crc kubenswrapper[4985]: I0128 19:47:11.186092 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:47:11 crc kubenswrapper[4985]: I0128 19:47:11.187352 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:47:41 crc kubenswrapper[4985]: I0128 19:47:41.186029 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:47:41 crc kubenswrapper[4985]: I0128 19:47:41.186626 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.186019 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.186655 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.186723 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.188096 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.188210 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" gracePeriod=600
Jan 28 19:48:11 crc kubenswrapper[4985]: E0128 19:48:11.308325 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.653013 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" exitCode=0
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.653309 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"}
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.653347 4985 scope.go:117] "RemoveContainer" containerID="a58511ae9f9eb92282ccb7faeceba6f13dffb55230695606dbf4a2da5b886b0d"
Jan 28 19:48:11 crc kubenswrapper[4985]: I0128 19:48:11.654203 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:11 crc kubenswrapper[4985]: E0128 19:48:11.654558 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:25 crc kubenswrapper[4985]: I0128 19:48:25.264049 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:25 crc kubenswrapper[4985]: E0128 19:48:25.264945 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:36 crc kubenswrapper[4985]: I0128 19:48:36.265076 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:36 crc kubenswrapper[4985]: E0128 19:48:36.266035 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:48:50 crc kubenswrapper[4985]: I0128 19:48:50.263844 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:48:50 crc kubenswrapper[4985]: E0128 19:48:50.264923 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.042831 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:01 crc kubenswrapper[4985]: E0128 19:49:01.043979 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerName="collect-profiles"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.043992 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerName="collect-profiles"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.044268 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="73b1d5c3-055f-41c9-aae7-f397142ddf05" containerName="collect-profiles"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.047285 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.047372 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.099685 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.099755 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.099963 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.201974 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202059 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202234 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202651 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.202763 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.225559 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"community-operators-xrfq5\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") " pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.381849 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:01 crc kubenswrapper[4985]: I0128 19:49:01.964964 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.432765 4985 generic.go:334] "Generic (PLEG): container finished" podID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f" exitCode=0
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.432856 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"}
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.433108 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerStarted","Data":"7231155381c11c3d4badfe2b0a0f3ce79d9af0ba702743f05c6d0732113049c6"}
Jan 28 19:49:02 crc kubenswrapper[4985]: I0128 19:49:02.436880 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 19:49:03 crc kubenswrapper[4985]: I0128 19:49:03.448577 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerStarted","Data":"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"}
Jan 28 19:49:04 crc kubenswrapper[4985]: I0128 19:49:04.264466 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:04 crc kubenswrapper[4985]: E0128 19:49:04.265068 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:06 crc kubenswrapper[4985]: I0128 19:49:06.487991 4985 generic.go:334] "Generic (PLEG): container finished" podID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db" exitCode=0
Jan 28 19:49:06 crc kubenswrapper[4985]: I0128 19:49:06.488057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"}
Jan 28 19:49:08 crc kubenswrapper[4985]: I0128 19:49:08.525608 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerStarted","Data":"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"}
Jan 28 19:49:08 crc kubenswrapper[4985]: I0128 19:49:08.564748 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-xrfq5" podStartSLOduration=4.100023969 podStartE2EDuration="8.564723289s" podCreationTimestamp="2026-01-28 19:49:00 +0000 UTC" firstStartedPulling="2026-01-28 19:49:02.436465013 +0000 UTC m=+5753.263027864" lastFinishedPulling="2026-01-28 19:49:06.901164323 +0000 UTC m=+5757.727727184" observedRunningTime="2026-01-28 19:49:08.555956411 +0000 UTC m=+5759.382519252" watchObservedRunningTime="2026-01-28 19:49:08.564723289 +0000 UTC m=+5759.391286150"
Jan 28 19:49:11 crc kubenswrapper[4985]: I0128 19:49:11.382452 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:11 crc kubenswrapper[4985]: I0128 19:49:11.382854 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:12 crc kubenswrapper[4985]: I0128 19:49:12.449345 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-xrfq5" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server" probeResult="failure" output=<
Jan 28 19:49:12 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 19:49:12 crc kubenswrapper[4985]: >
Jan 28 19:49:17 crc kubenswrapper[4985]: I0128 19:49:17.265080 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:17 crc kubenswrapper[4985]: E0128 19:49:17.266109 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:21 crc kubenswrapper[4985]: I0128 19:49:21.441090 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:21 crc kubenswrapper[4985]: I0128 19:49:21.500111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:21 crc kubenswrapper[4985]: I0128 19:49:21.689245 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:22 crc kubenswrapper[4985]: I0128 19:49:22.688982 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-xrfq5" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server" containerID="cri-o://79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99" gracePeriod=2
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.175045 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.234903 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") pod \"f40cb468-52d9-418f-ae6e-f1262531b85a\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") "
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.235478 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") pod \"f40cb468-52d9-418f-ae6e-f1262531b85a\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") "
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.235557 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") pod \"f40cb468-52d9-418f-ae6e-f1262531b85a\" (UID: \"f40cb468-52d9-418f-ae6e-f1262531b85a\") "
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.238114 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities" (OuterVolumeSpecName: "utilities") pod "f40cb468-52d9-418f-ae6e-f1262531b85a" (UID: "f40cb468-52d9-418f-ae6e-f1262531b85a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.252797 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n" (OuterVolumeSpecName: "kube-api-access-6v94n") pod "f40cb468-52d9-418f-ae6e-f1262531b85a" (UID: "f40cb468-52d9-418f-ae6e-f1262531b85a"). InnerVolumeSpecName "kube-api-access-6v94n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.308472 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f40cb468-52d9-418f-ae6e-f1262531b85a" (UID: "f40cb468-52d9-418f-ae6e-f1262531b85a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.340549 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.340581 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6v94n\" (UniqueName: \"kubernetes.io/projected/f40cb468-52d9-418f-ae6e-f1262531b85a-kube-api-access-6v94n\") on node \"crc\" DevicePath \"\""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.340592 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f40cb468-52d9-418f-ae6e-f1262531b85a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705631 4985 generic.go:334] "Generic (PLEG): container finished" podID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99" exitCode=0
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705688 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"}
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705719 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-xrfq5" event={"ID":"f40cb468-52d9-418f-ae6e-f1262531b85a","Type":"ContainerDied","Data":"7231155381c11c3d4badfe2b0a0f3ce79d9af0ba702743f05c6d0732113049c6"}
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705727 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-xrfq5"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.705741 4985 scope.go:117] "RemoveContainer" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.731905 4985 scope.go:117] "RemoveContainer" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.763162 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.777787 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-xrfq5"]
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.785636 4985 scope.go:117] "RemoveContainer" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.840273 4985 scope.go:117] "RemoveContainer" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"
Jan 28 19:49:23 crc kubenswrapper[4985]: E0128 19:49:23.840828 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99\": container with ID starting with 79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99 not found: ID does not exist" containerID="79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.840889 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99"} err="failed to get container status \"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99\": rpc error: code = NotFound desc = could not find container \"79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99\": container with ID starting with 79b1026617f713263bd43c67a32842b7ab1b65499f7926a32490e896e746ef99 not found: ID does not exist"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.840915 4985 scope.go:117] "RemoveContainer" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"
Jan 28 19:49:23 crc kubenswrapper[4985]: E0128 19:49:23.841324 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db\": container with ID starting with 5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db not found: ID does not exist" containerID="5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.841394 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db"} err="failed to get container status \"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db\": rpc error: code = NotFound desc = could not find container \"5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db\": container with ID starting with 5660206f8ccda53e72a818ddecf906f494416ad470d48824b65394708d9e28db not found: ID does not exist"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.841436 4985 scope.go:117] "RemoveContainer" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"
Jan 28 19:49:23 crc kubenswrapper[4985]: E0128 19:49:23.841763 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f\": container with ID starting with 669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f not found: ID does not exist" containerID="669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"
Jan 28 19:49:23 crc kubenswrapper[4985]: I0128 19:49:23.841799 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f"} err="failed to get container status \"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f\": rpc error: code = NotFound desc = could not find container \"669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f\": container with ID starting with 669c2630b7e7219f34535f1dc03d5afba113615394fd5b1bf260d7a2c3bc238f not found: ID does not exist"
Jan 28 19:49:25 crc kubenswrapper[4985]: I0128 19:49:25.284221 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" path="/var/lib/kubelet/pods/f40cb468-52d9-418f-ae6e-f1262531b85a/volumes"
Jan 28 19:49:32 crc kubenswrapper[4985]: I0128 19:49:32.264420 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:32 crc kubenswrapper[4985]: E0128 19:49:32.266274 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:49:46 crc kubenswrapper[4985]: I0128 19:49:46.264796 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:49:46 crc kubenswrapper[4985]: E0128 19:49:46.265591 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:00 crc kubenswrapper[4985]: I0128 19:50:00.265582 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
Jan 28 19:50:00 crc kubenswrapper[4985]: E0128 19:50:00.268134 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 19:50:12 crc kubenswrapper[4985]: I0128 19:50:12.264197 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c"
containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:50:12 crc kubenswrapper[4985]: E0128 19:50:12.265000 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:50:25 crc kubenswrapper[4985]: I0128 19:50:25.270340 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:50:25 crc kubenswrapper[4985]: E0128 19:50:25.271394 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.889877 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"] Jan 28 19:50:26 crc kubenswrapper[4985]: E0128 19:50:26.890936 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-content" Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.890955 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-content" Jan 28 19:50:26 crc kubenswrapper[4985]: E0128 19:50:26.891011 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-utilities" Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.891023 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="extract-utilities" Jan 28 19:50:26 crc kubenswrapper[4985]: E0128 19:50:26.891040 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server" Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.891048 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server" Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.891411 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="f40cb468-52d9-418f-ae6e-f1262531b85a" containerName="registry-server" Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.897681 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:26 crc kubenswrapper[4985]: I0128 19:50:26.903427 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"] Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.005741 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.006077 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.006101 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.107732 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.107778 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.107949 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.108334 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.108435 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.730902 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"redhat-operators-5lfg8\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:27 crc kubenswrapper[4985]: I0128 19:50:27.825977 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:28 crc kubenswrapper[4985]: I0128 19:50:28.348492 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"] Jan 28 19:50:28 crc kubenswrapper[4985]: I0128 19:50:28.507507 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerStarted","Data":"47e61066b587de4bdb4d330875bee6c011e7fa07480ad9c2d8f5468abae1466f"} Jan 28 19:50:29 crc kubenswrapper[4985]: I0128 19:50:29.517524 4985 generic.go:334] "Generic (PLEG): container finished" podID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc" exitCode=0 Jan 28 19:50:29 crc kubenswrapper[4985]: I0128 19:50:29.517770 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc"} Jan 28 19:50:30 crc kubenswrapper[4985]: I0128 19:50:30.548475 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerStarted","Data":"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"} Jan 28 19:50:37 crc kubenswrapper[4985]: I0128 19:50:37.264513 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:50:37 crc kubenswrapper[4985]: E0128 19:50:37.265423 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:50:38 crc kubenswrapper[4985]: I0128 19:50:38.532981 4985 generic.go:334] "Generic (PLEG): container finished" podID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070" exitCode=0 Jan 28 19:50:38 crc kubenswrapper[4985]: I0128 19:50:38.533312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"} Jan 28 19:50:40 crc kubenswrapper[4985]: I0128 19:50:40.566039 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerStarted","Data":"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"} Jan 28 19:50:40 crc kubenswrapper[4985]: I0128 19:50:40.604988 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-5lfg8" podStartSLOduration=5.158449258 podStartE2EDuration="14.604963937s" podCreationTimestamp="2026-01-28 19:50:26 +0000 UTC" firstStartedPulling="2026-01-28 19:50:29.520049894 +0000 UTC m=+5840.346612715" lastFinishedPulling="2026-01-28 19:50:38.966564533 +0000 UTC m=+5849.793127394" observedRunningTime="2026-01-28 19:50:40.592101373 +0000 UTC m=+5851.418664234" watchObservedRunningTime="2026-01-28 19:50:40.604963937 +0000 UTC m=+5851.431526758" Jan 28 19:50:47 crc kubenswrapper[4985]: I0128 19:50:47.826230 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:47 crc kubenswrapper[4985]: I0128 19:50:47.826875 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:50:48 crc kubenswrapper[4985]: I0128 19:50:48.895741 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5lfg8" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" probeResult="failure" output=< Jan 28 19:50:48 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:50:48 crc kubenswrapper[4985]: > Jan 28 19:50:49 crc kubenswrapper[4985]: I0128 19:50:49.264831 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:50:49 crc kubenswrapper[4985]: E0128 19:50:49.265590 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:50:58 crc kubenswrapper[4985]: I0128 19:50:58.899791 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-5lfg8" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" probeResult="failure" output=< Jan 28 19:50:58 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 19:50:58 crc kubenswrapper[4985]: > Jan 28 19:51:00 crc kubenswrapper[4985]: I0128 19:51:00.264804 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:51:00 crc kubenswrapper[4985]: E0128 19:51:00.266190 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:51:07 crc kubenswrapper[4985]: I0128 19:51:07.918628 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:51:08 crc kubenswrapper[4985]: I0128 19:51:08.006129 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:51:08 crc kubenswrapper[4985]: I0128 19:51:08.171354 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-5lfg8"] Jan 28 19:51:08 crc kubenswrapper[4985]: I0128 19:51:08.995793 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-5lfg8" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" containerID="cri-o://5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0" gracePeriod=2 Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.677027 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.799890 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") pod \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.800024 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") pod \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.800136 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") pod \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\" (UID: \"7409f2a2-14dd-4bd9-9b0d-68d468d7a036\") " Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.801652 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities" (OuterVolumeSpecName: "utilities") pod "7409f2a2-14dd-4bd9-9b0d-68d468d7a036" (UID: "7409f2a2-14dd-4bd9-9b0d-68d468d7a036"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.811293 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42" (OuterVolumeSpecName: "kube-api-access-knb42") pod "7409f2a2-14dd-4bd9-9b0d-68d468d7a036" (UID: "7409f2a2-14dd-4bd9-9b0d-68d468d7a036"). InnerVolumeSpecName "kube-api-access-knb42". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.902797 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-knb42\" (UniqueName: \"kubernetes.io/projected/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-kube-api-access-knb42\") on node \"crc\" DevicePath \"\"" Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.902834 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:51:09 crc kubenswrapper[4985]: I0128 19:51:09.917194 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7409f2a2-14dd-4bd9-9b0d-68d468d7a036" (UID: "7409f2a2-14dd-4bd9-9b0d-68d468d7a036"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.006116 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7409f2a2-14dd-4bd9-9b0d-68d468d7a036-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014621 4985 generic.go:334] "Generic (PLEG): container finished" podID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0" exitCode=0 Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014678 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"} Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014746 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-5lfg8" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014776 4985 scope.go:117] "RemoveContainer" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.014760 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-5lfg8" event={"ID":"7409f2a2-14dd-4bd9-9b0d-68d468d7a036","Type":"ContainerDied","Data":"47e61066b587de4bdb4d330875bee6c011e7fa07480ad9c2d8f5468abae1466f"} Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.061304 4985 scope.go:117] "RemoveContainer" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.093859 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"] Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.111172 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-5lfg8"] Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.120694 4985 scope.go:117] "RemoveContainer" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.169555 4985 scope.go:117] "RemoveContainer" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0" Jan 28 19:51:10 crc kubenswrapper[4985]: E0128 19:51:10.170126 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0\": container with ID starting with 5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0 not found: ID does not exist" containerID="5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.170171 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0"} err="failed to get container status \"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0\": rpc error: code = NotFound desc = could not find container \"5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0\": container with ID starting with 5f2621bc4b0f83d54e5cc75144ede39a20f20d72fa06e8afca5b534a6b1809e0 not found: ID does not exist" Jan 28 19:51:10 crc 
kubenswrapper[4985]: I0128 19:51:10.170197 4985 scope.go:117] "RemoveContainer" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070" Jan 28 19:51:10 crc kubenswrapper[4985]: E0128 19:51:10.170735 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070\": container with ID starting with 424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070 not found: ID does not exist" containerID="424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.170802 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070"} err="failed to get container status \"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070\": rpc error: code = NotFound desc = could not find container \"424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070\": container with ID starting with 424e0291690b5344cc49ad34e7276c77cba1c197664565a27b7f2edb5b050070 not found: ID does not exist" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.170840 4985 scope.go:117] "RemoveContainer" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc" Jan 28 19:51:10 crc kubenswrapper[4985]: E0128 19:51:10.171721 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc\": container with ID starting with e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc not found: ID does not exist" containerID="e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc" Jan 28 19:51:10 crc kubenswrapper[4985]: I0128 19:51:10.171758 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc"} err="failed to get container status \"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc\": rpc error: code = NotFound desc = could not find container \"e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc\": container with ID starting with e49792cba31d1ccbf561115c6fb34384ab26cf5bb42cb18ba24d834890eac3cc not found: ID does not exist" Jan 28 19:51:11 crc kubenswrapper[4985]: I0128 19:51:11.288536 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" path="/var/lib/kubelet/pods/7409f2a2-14dd-4bd9-9b0d-68d468d7a036/volumes" Jan 28 19:51:13 crc kubenswrapper[4985]: I0128 19:51:13.263920 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:51:13 crc kubenswrapper[4985]: E0128 19:51:13.264228 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:51:28 crc kubenswrapper[4985]: I0128 19:51:28.264190 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" 
Jan 28 19:51:28 crc kubenswrapper[4985]: E0128 19:51:28.265117 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:51:40 crc kubenswrapper[4985]: I0128 19:51:40.264849 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:51:40 crc kubenswrapper[4985]: E0128 19:51:40.266019 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:51:52 crc kubenswrapper[4985]: I0128 19:51:52.264491 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:51:52 crc kubenswrapper[4985]: E0128 19:51:52.265573 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:05 crc kubenswrapper[4985]: I0128 19:52:05.264567 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:05 crc kubenswrapper[4985]: E0128 19:52:05.265686 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.434728 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:10 crc kubenswrapper[4985]: E0128 19:52:10.435572 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435582 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" Jan 28 19:52:10 crc kubenswrapper[4985]: E0128 19:52:10.435617 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-utilities" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435623 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-utilities" Jan 28 19:52:10 crc kubenswrapper[4985]: E0128 19:52:10.435640 4985 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-content" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435646 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="extract-content" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.435851 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7409f2a2-14dd-4bd9-9b0d-68d468d7a036" containerName="registry-server" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.437408 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.439050 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.591561 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.591640 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.591737 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.694507 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.694975 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.695160 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.695706 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.695792 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.715847 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"redhat-marketplace-q8696\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:10 crc kubenswrapper[4985]: I0128 19:52:10.781313 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.357141 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.985005 4985 generic.go:334] "Generic (PLEG): container finished" podID="ad73e021-615d-4c78-926e-af3b8812da9c" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" exitCode=0 Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.985071 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a"} Jan 28 19:52:11 crc kubenswrapper[4985]: I0128 19:52:11.985375 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerStarted","Data":"513bc7b6059f1e0b7811ca2a6e846ab89a1bd700812e1eb8437574fc3b92572e"} Jan 28 19:52:14 crc kubenswrapper[4985]: I0128 19:52:14.023537 4985 generic.go:334] "Generic (PLEG): container finished" podID="ad73e021-615d-4c78-926e-af3b8812da9c" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" exitCode=0 Jan 28 19:52:14 crc kubenswrapper[4985]: I0128 19:52:14.023605 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe"} Jan 28 19:52:15 crc kubenswrapper[4985]: I0128 19:52:15.038401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerStarted","Data":"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace"} Jan 28 19:52:15 crc kubenswrapper[4985]: I0128 19:52:15.073144 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-q8696" podStartSLOduration=2.557558266 podStartE2EDuration="5.073118833s" podCreationTimestamp="2026-01-28 19:52:10 +0000 UTC" firstStartedPulling="2026-01-28 19:52:11.987386144 +0000 UTC m=+5942.813948975" 
lastFinishedPulling="2026-01-28 19:52:14.502946731 +0000 UTC m=+5945.329509542" observedRunningTime="2026-01-28 19:52:15.062279896 +0000 UTC m=+5945.888842757" watchObservedRunningTime="2026-01-28 19:52:15.073118833 +0000 UTC m=+5945.899681684" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.264145 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:20 crc kubenswrapper[4985]: E0128 19:52:20.265010 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.782510 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.782602 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:20 crc kubenswrapper[4985]: I0128 19:52:20.860132 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:21 crc kubenswrapper[4985]: I0128 19:52:21.164398 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:21 crc kubenswrapper[4985]: I0128 19:52:21.228986 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.128713 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-q8696" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" containerID="cri-o://b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" gracePeriod=2 Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.679183 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.851140 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") pod \"ad73e021-615d-4c78-926e-af3b8812da9c\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.851535 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") pod \"ad73e021-615d-4c78-926e-af3b8812da9c\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.851588 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") pod \"ad73e021-615d-4c78-926e-af3b8812da9c\" (UID: \"ad73e021-615d-4c78-926e-af3b8812da9c\") " Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.852599 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities" (OuterVolumeSpecName: "utilities") pod "ad73e021-615d-4c78-926e-af3b8812da9c" (UID: "ad73e021-615d-4c78-926e-af3b8812da9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.860496 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n" (OuterVolumeSpecName: "kube-api-access-m8d7n") pod "ad73e021-615d-4c78-926e-af3b8812da9c" (UID: "ad73e021-615d-4c78-926e-af3b8812da9c"). InnerVolumeSpecName "kube-api-access-m8d7n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.875751 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad73e021-615d-4c78-926e-af3b8812da9c" (UID: "ad73e021-615d-4c78-926e-af3b8812da9c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.955003 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.955043 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad73e021-615d-4c78-926e-af3b8812da9c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:23 crc kubenswrapper[4985]: I0128 19:52:23.955061 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8d7n\" (UniqueName: \"kubernetes.io/projected/ad73e021-615d-4c78-926e-af3b8812da9c-kube-api-access-m8d7n\") on node \"crc\" DevicePath \"\"" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150694 4985 generic.go:334] "Generic (PLEG): container finished" podID="ad73e021-615d-4c78-926e-af3b8812da9c" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" exitCode=0 Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150736 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace"} Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150764 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-q8696" event={"ID":"ad73e021-615d-4c78-926e-af3b8812da9c","Type":"ContainerDied","Data":"513bc7b6059f1e0b7811ca2a6e846ab89a1bd700812e1eb8437574fc3b92572e"} Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150783 4985 scope.go:117] "RemoveContainer" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.150797 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-q8696" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.183437 4985 scope.go:117] "RemoveContainer" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.233129 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.238289 4985 scope.go:117] "RemoveContainer" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.261863 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-q8696"] Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.325796 4985 scope.go:117] "RemoveContainer" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" Jan 28 19:52:24 crc kubenswrapper[4985]: E0128 19:52:24.326212 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace\": container with ID starting with b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace not found: ID does not exist" containerID="b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326243 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace"} err="failed to get container status \"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace\": rpc error: code = NotFound desc = could not find container \"b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace\": container with ID starting with b9b2fd7fd1a164628c333d4cb36d5dd15976e8c61757c1f8b7ba1227aa965ace not found: ID does not exist" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326274 4985 scope.go:117] "RemoveContainer" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" Jan 28 19:52:24 crc kubenswrapper[4985]: E0128 19:52:24.326599 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe\": container with ID starting with 763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe not found: ID does not exist" containerID="763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326628 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe"} err="failed to get container status \"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe\": rpc error: code = NotFound desc = could not find container \"763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe\": container with ID starting with 763fdfcbcd6b57e7eabf89bcf421033c8cd94bf1cf9d1d388d03a36c2c5a5dfe not found: ID does not exist" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.326648 4985 scope.go:117] "RemoveContainer" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" Jan 28 19:52:24 crc kubenswrapper[4985]: E0128 19:52:24.327113 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a\": container with ID starting with b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a not found: ID does not exist" containerID="b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a" Jan 28 19:52:24 crc kubenswrapper[4985]: I0128 19:52:24.327170 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a"} err="failed to get container status \"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a\": rpc error: code = NotFound desc = could not find container \"b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a\": container with ID starting with b211632c55b2c798a556c1d708897e7b265d99f9b7e575a3ad1bcd7d71b9eb6a not found: ID does not exist" Jan 28 19:52:25 crc kubenswrapper[4985]: I0128 19:52:25.289786 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" path="/var/lib/kubelet/pods/ad73e021-615d-4c78-926e-af3b8812da9c/volumes" Jan 28 19:52:32 crc kubenswrapper[4985]: I0128 19:52:32.263773 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:32 crc kubenswrapper[4985]: E0128 19:52:32.265127 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:45 crc kubenswrapper[4985]: I0128 19:52:45.264513 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:45 crc kubenswrapper[4985]: E0128 19:52:45.265335 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:56 crc kubenswrapper[4985]: I0128 19:52:56.265858 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:52:56 crc kubenswrapper[4985]: E0128 19:52:56.267181 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:52:59 crc kubenswrapper[4985]: E0128 19:52:59.610793 4985 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.195:43584->38.102.83.195:43365: write tcp 38.102.83.195:43584->38.102.83.195:43365: write: broken pipe Jan 28 19:53:07 crc kubenswrapper[4985]: I0128 19:53:07.264473 4985 scope.go:117] 
"RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:53:07 crc kubenswrapper[4985]: E0128 19:53:07.265237 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:53:21 crc kubenswrapper[4985]: I0128 19:53:21.264328 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:53:21 crc kubenswrapper[4985]: I0128 19:53:21.946419 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182"} Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.129461 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:41 crc kubenswrapper[4985]: E0128 19:53:41.133885 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.133915 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" Jan 28 19:53:41 crc kubenswrapper[4985]: E0128 19:53:41.133969 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-utilities" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.133983 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-utilities" Jan 28 19:53:41 crc kubenswrapper[4985]: E0128 19:53:41.134010 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-content" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.134024 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="extract-content" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.134488 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad73e021-615d-4c78-926e-af3b8812da9c" containerName="registry-server" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.137846 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.143781 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.176631 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.176722 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.176786 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279305 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279381 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279437 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.279938 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.280122 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.307231 4985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"certified-operators-5ftj6\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:41 crc kubenswrapper[4985]: I0128 19:53:41.467173 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:42 crc kubenswrapper[4985]: I0128 19:53:42.011422 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:42 crc kubenswrapper[4985]: I0128 19:53:42.246064 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0"} Jan 28 19:53:42 crc kubenswrapper[4985]: I0128 19:53:42.246131 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"753d0099d3c236eea1dc82804e44c58a26c20aeba82d466d277586f3d9937bb8"} Jan 28 19:53:43 crc kubenswrapper[4985]: I0128 19:53:43.262666 4985 generic.go:334] "Generic (PLEG): container finished" podID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" exitCode=0 Jan 28 19:53:43 crc kubenswrapper[4985]: I0128 19:53:43.262747 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0"} Jan 28 19:53:44 crc kubenswrapper[4985]: I0128 19:53:44.278790 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd"} Jan 28 19:53:46 crc kubenswrapper[4985]: I0128 19:53:46.297830 4985 generic.go:334] "Generic (PLEG): container finished" podID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" exitCode=0 Jan 28 19:53:46 crc kubenswrapper[4985]: I0128 19:53:46.297948 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd"} Jan 28 19:53:47 crc kubenswrapper[4985]: I0128 19:53:47.308526 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerStarted","Data":"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be"} Jan 28 19:53:48 crc kubenswrapper[4985]: I0128 19:53:48.349617 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5ftj6" podStartSLOduration=3.928772177 podStartE2EDuration="7.349597424s" podCreationTimestamp="2026-01-28 19:53:41 +0000 UTC" firstStartedPulling="2026-01-28 19:53:43.266638893 +0000 UTC m=+6034.093201734" lastFinishedPulling="2026-01-28 
19:53:46.68746413 +0000 UTC m=+6037.514026981" observedRunningTime="2026-01-28 19:53:48.339399065 +0000 UTC m=+6039.165961886" watchObservedRunningTime="2026-01-28 19:53:48.349597424 +0000 UTC m=+6039.176160245" Jan 28 19:53:51 crc kubenswrapper[4985]: I0128 19:53:51.467859 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:51 crc kubenswrapper[4985]: I0128 19:53:51.468361 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:51 crc kubenswrapper[4985]: I0128 19:53:51.554393 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:52 crc kubenswrapper[4985]: I0128 19:53:52.444875 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:52 crc kubenswrapper[4985]: I0128 19:53:52.500735 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:54 crc kubenswrapper[4985]: I0128 19:53:54.405052 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5ftj6" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" containerID="cri-o://adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" gracePeriod=2 Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.088476 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.108074 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") pod \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.108348 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") pod \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.108405 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") pod \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\" (UID: \"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0\") " Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.109459 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities" (OuterVolumeSpecName: "utilities") pod "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" (UID: "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.116544 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw" (OuterVolumeSpecName: "kube-api-access-p6zbw") pod "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" (UID: "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0"). InnerVolumeSpecName "kube-api-access-p6zbw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.210334 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.210373 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6zbw\" (UniqueName: \"kubernetes.io/projected/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-kube-api-access-p6zbw\") on node \"crc\" DevicePath \"\"" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.420745 4985 generic.go:334] "Generic (PLEG): container finished" podID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" exitCode=0 Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.420831 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be"} Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.421969 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5ftj6" event={"ID":"abac5e1e-6e1b-4391-b00f-2b9c2162a8b0","Type":"ContainerDied","Data":"753d0099d3c236eea1dc82804e44c58a26c20aeba82d466d277586f3d9937bb8"} Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.420873 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5ftj6" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.422003 4985 scope.go:117] "RemoveContainer" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.463892 4985 scope.go:117] "RemoveContainer" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.500174 4985 scope.go:117] "RemoveContainer" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.538385 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" (UID: "abac5e1e-6e1b-4391-b00f-2b9c2162a8b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.579189 4985 scope.go:117] "RemoveContainer" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" Jan 28 19:53:55 crc kubenswrapper[4985]: E0128 19:53:55.580011 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be\": container with ID starting with adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be not found: ID does not exist" containerID="adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580070 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be"} err="failed to get container status \"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be\": rpc error: code = NotFound desc = could not find container \"adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be\": container with ID starting with adff484cb23326aae01e4930c8df003d32d0e13d919f1c686fa70adb81da39be not found: ID does not exist" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580102 4985 scope.go:117] "RemoveContainer" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" Jan 28 19:53:55 crc kubenswrapper[4985]: E0128 19:53:55.580413 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd\": container with ID starting with 40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd not found: ID does not exist" containerID="40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580445 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd"} err="failed to get container status \"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd\": rpc error: code = NotFound desc = could not find container \"40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd\": container with ID starting with 40f61d601ca4cd4990f6fc0be73bfbf0a0743b341fa251a915f9779ba8b4b5fd not found: ID does not exist" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580465 4985 scope.go:117] "RemoveContainer" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" Jan 28 19:53:55 crc kubenswrapper[4985]: E0128 19:53:55.580720 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0\": container with ID starting with cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0 not found: ID does not exist" containerID="cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.580749 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0"} err="failed to get container status \"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0\": rpc error: code = NotFound desc = could not 
find container \"cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0\": container with ID starting with cb48b79f620e005343e75889d14b8b517a96c91d892965299d183c4def74a3b0 not found: ID does not exist" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.621186 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.775753 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:55 crc kubenswrapper[4985]: I0128 19:53:55.786805 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5ftj6"] Jan 28 19:53:57 crc kubenswrapper[4985]: I0128 19:53:57.276355 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" path="/var/lib/kubelet/pods/abac5e1e-6e1b-4391-b00f-2b9c2162a8b0/volumes" Jan 28 19:55:41 crc kubenswrapper[4985]: I0128 19:55:41.186614 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:55:41 crc kubenswrapper[4985]: I0128 19:55:41.187210 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:56:11 crc kubenswrapper[4985]: I0128 19:56:11.186031 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:56:11 crc kubenswrapper[4985]: I0128 19:56:11.186573 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.187206 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.189094 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.189317 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:56:41 crc 
kubenswrapper[4985]: I0128 19:56:41.190550 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.190781 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182" gracePeriod=600 Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.604967 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182" exitCode=0 Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.605007 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182"} Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.605365 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"} Jan 28 19:56:41 crc kubenswrapper[4985]: I0128 19:56:41.605395 4985 scope.go:117] "RemoveContainer" containerID="ee334e8e205c53af3a187dc9df7f6742a1d4450fa686282e924287af8730f46c" Jan 28 19:58:41 crc kubenswrapper[4985]: I0128 19:58:41.186846 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:58:41 crc kubenswrapper[4985]: I0128 19:58:41.187579 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:59:11 crc kubenswrapper[4985]: I0128 19:59:11.186045 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:59:11 crc kubenswrapper[4985]: I0128 19:59:11.186747 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.186033 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.186602 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.186671 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.187722 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 19:59:41 crc kubenswrapper[4985]: I0128 19:59:41.187787 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" gracePeriod=600 Jan 28 19:59:41 crc kubenswrapper[4985]: E0128 19:59:41.357467 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.049415 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" exitCode=0 Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.049480 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"} Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.049809 4985 scope.go:117] "RemoveContainer" containerID="69284d970ac84e3960e3531fa9880703937f5211cd6b09b9884c28779b8c5182" Jan 28 19:59:42 crc kubenswrapper[4985]: I0128 19:59:42.050595 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 19:59:42 crc kubenswrapper[4985]: E0128 19:59:42.050942 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" 
podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 19:59:54 crc kubenswrapper[4985]: I0128 19:59:54.264834 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 19:59:54 crc kubenswrapper[4985]: E0128 19:59:54.265564 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.180637 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf"] Jan 28 20:00:00 crc kubenswrapper[4985]: E0128 20:00:00.182894 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-content" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.182925 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-content" Jan 28 20:00:00 crc kubenswrapper[4985]: E0128 20:00:00.183124 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-utilities" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.183136 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="extract-utilities" Jan 28 20:00:00 crc kubenswrapper[4985]: E0128 20:00:00.183167 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.183176 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.183947 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="abac5e1e-6e1b-4391-b00f-2b9c2162a8b0" containerName="registry-server" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.184956 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.187729 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.187799 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.194159 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf"] Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.258119 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.258186 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.258209 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.362705 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.363113 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.363149 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.365114 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod 
\"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.379489 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.379484 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"collect-profiles-29493840-j8wmf\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:00 crc kubenswrapper[4985]: I0128 20:00:00.520355 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:01 crc kubenswrapper[4985]: I0128 20:00:01.027267 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf"] Jan 28 20:00:01 crc kubenswrapper[4985]: I0128 20:00:01.317097 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerStarted","Data":"91afb9bc81746a22360329a0c2e9c3578ef8fbe8cf8f47db2261a77cef8f47e7"} Jan 28 20:00:02 crc kubenswrapper[4985]: I0128 20:00:02.346513 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerStarted","Data":"a6a9bf023f54cce16eb2987d2f250a0bda2a4d180506c19b25054195daba1f4f"} Jan 28 20:00:02 crc kubenswrapper[4985]: I0128 20:00:02.400826 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" podStartSLOduration=2.400796175 podStartE2EDuration="2.400796175s" podCreationTimestamp="2026-01-28 20:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 20:00:02.392576972 +0000 UTC m=+6413.219139823" watchObservedRunningTime="2026-01-28 20:00:02.400796175 +0000 UTC m=+6413.227359036" Jan 28 20:00:04 crc kubenswrapper[4985]: I0128 20:00:04.373135 4985 generic.go:334] "Generic (PLEG): container finished" podID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerID="a6a9bf023f54cce16eb2987d2f250a0bda2a4d180506c19b25054195daba1f4f" exitCode=0 Jan 28 20:00:04 crc kubenswrapper[4985]: I0128 20:00:04.373347 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerDied","Data":"a6a9bf023f54cce16eb2987d2f250a0bda2a4d180506c19b25054195daba1f4f"} Jan 28 20:00:05 crc kubenswrapper[4985]: I0128 20:00:05.875769 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.015489 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") pod \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.015689 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") pod \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.016447 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") pod \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\" (UID: \"a139d0d8-6583-4fe1-b693-0a3162f84c9a\") " Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.021216 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume" (OuterVolumeSpecName: "config-volume") pod "a139d0d8-6583-4fe1-b693-0a3162f84c9a" (UID: "a139d0d8-6583-4fe1-b693-0a3162f84c9a"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.028215 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a139d0d8-6583-4fe1-b693-0a3162f84c9a" (UID: "a139d0d8-6583-4fe1-b693-0a3162f84c9a"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.029879 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78" (OuterVolumeSpecName: "kube-api-access-s7d78") pod "a139d0d8-6583-4fe1-b693-0a3162f84c9a" (UID: "a139d0d8-6583-4fe1-b693-0a3162f84c9a"). InnerVolumeSpecName "kube-api-access-s7d78". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.121303 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a139d0d8-6583-4fe1-b693-0a3162f84c9a-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.121347 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a139d0d8-6583-4fe1-b693-0a3162f84c9a-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.121363 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s7d78\" (UniqueName: \"kubernetes.io/projected/a139d0d8-6583-4fe1-b693-0a3162f84c9a-kube-api-access-s7d78\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.399387 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" event={"ID":"a139d0d8-6583-4fe1-b693-0a3162f84c9a","Type":"ContainerDied","Data":"91afb9bc81746a22360329a0c2e9c3578ef8fbe8cf8f47db2261a77cef8f47e7"} Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.399431 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91afb9bc81746a22360329a0c2e9c3578ef8fbe8cf8f47db2261a77cef8f47e7" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.399878 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493840-j8wmf" Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.463120 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 20:00:06 crc kubenswrapper[4985]: I0128 20:00:06.477547 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493795-qh4k7"] Jan 28 20:00:07 crc kubenswrapper[4985]: I0128 20:00:07.280625 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7f7054-2ff2-4045-aa35-4345b449dc70" path="/var/lib/kubelet/pods/dc7f7054-2ff2-4045-aa35-4345b449dc70/volumes" Jan 28 20:00:09 crc kubenswrapper[4985]: I0128 20:00:09.264551 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:09 crc kubenswrapper[4985]: E0128 20:00:09.265099 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.553793 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:14 crc kubenswrapper[4985]: E0128 20:00:14.554971 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerName="collect-profiles" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.554988 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerName="collect-profiles" Jan 28 20:00:14 crc 
kubenswrapper[4985]: I0128 20:00:14.555336 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a139d0d8-6583-4fe1-b693-0a3162f84c9a" containerName="collect-profiles" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.557557 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.557656 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.684168 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.684752 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.684805 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.788630 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.788703 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.788899 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.789739 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.790018 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.815518 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"community-operators-bq9kf\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:14 crc kubenswrapper[4985]: I0128 20:00:14.914693 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:15 crc kubenswrapper[4985]: I0128 20:00:15.453537 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:15 crc kubenswrapper[4985]: I0128 20:00:15.521241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerStarted","Data":"692e290ffd1bb0bf80c942964ddc2e19c3d4374e1f1bb6ba46b12a95e1c75bc8"} Jan 28 20:00:16 crc kubenswrapper[4985]: I0128 20:00:16.534401 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bc390cd-8043-4c98-b7ce-c12170795362" containerID="fbcb4e57c66f42d19bfb4fb5f2f9a72f9458e83a1b7c389068e41fb01f3d54eb" exitCode=0 Jan 28 20:00:16 crc kubenswrapper[4985]: I0128 20:00:16.534452 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"fbcb4e57c66f42d19bfb4fb5f2f9a72f9458e83a1b7c389068e41fb01f3d54eb"} Jan 28 20:00:16 crc kubenswrapper[4985]: I0128 20:00:16.537477 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 20:00:18 crc kubenswrapper[4985]: I0128 20:00:18.557878 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerStarted","Data":"274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe"} Jan 28 20:00:20 crc kubenswrapper[4985]: I0128 20:00:20.597216 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bc390cd-8043-4c98-b7ce-c12170795362" containerID="274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe" exitCode=0 Jan 28 20:00:20 crc kubenswrapper[4985]: I0128 20:00:20.597285 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe"} Jan 28 20:00:21 crc kubenswrapper[4985]: I0128 20:00:21.275722 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:21 crc kubenswrapper[4985]: E0128 20:00:21.276474 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:22 crc kubenswrapper[4985]: I0128 20:00:22.621395 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerStarted","Data":"a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e"} Jan 28 20:00:22 crc kubenswrapper[4985]: I0128 20:00:22.644882 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bq9kf" podStartSLOduration=3.857729909 podStartE2EDuration="8.644862698s" podCreationTimestamp="2026-01-28 20:00:14 +0000 UTC" firstStartedPulling="2026-01-28 20:00:16.537268565 +0000 UTC m=+6427.363831386" lastFinishedPulling="2026-01-28 20:00:21.324401344 +0000 UTC m=+6432.150964175" observedRunningTime="2026-01-28 20:00:22.64002121 +0000 UTC m=+6433.466584051" watchObservedRunningTime="2026-01-28 20:00:22.644862698 +0000 UTC m=+6433.471425519" Jan 28 20:00:24 crc kubenswrapper[4985]: I0128 20:00:24.915652 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:24 crc kubenswrapper[4985]: I0128 20:00:24.916047 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:24 crc kubenswrapper[4985]: I0128 20:00:24.969529 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:34 crc kubenswrapper[4985]: I0128 20:00:34.973384 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:35 crc kubenswrapper[4985]: I0128 20:00:35.028162 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:35 crc kubenswrapper[4985]: I0128 20:00:35.792151 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bq9kf" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" containerID="cri-o://a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e" gracePeriod=2 Jan 28 20:00:36 crc kubenswrapper[4985]: I0128 20:00:36.265005 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:36 crc kubenswrapper[4985]: E0128 20:00:36.265745 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:36 crc kubenswrapper[4985]: I0128 20:00:36.807669 4985 generic.go:334] "Generic (PLEG): container finished" podID="3bc390cd-8043-4c98-b7ce-c12170795362" containerID="a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e" exitCode=0 Jan 28 20:00:36 crc kubenswrapper[4985]: I0128 20:00:36.807705 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" 
event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e"} Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.024287 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.194364 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") pod \"3bc390cd-8043-4c98-b7ce-c12170795362\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.194503 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") pod \"3bc390cd-8043-4c98-b7ce-c12170795362\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.194766 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") pod \"3bc390cd-8043-4c98-b7ce-c12170795362\" (UID: \"3bc390cd-8043-4c98-b7ce-c12170795362\") " Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.196110 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities" (OuterVolumeSpecName: "utilities") pod "3bc390cd-8043-4c98-b7ce-c12170795362" (UID: "3bc390cd-8043-4c98-b7ce-c12170795362"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.209674 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x" (OuterVolumeSpecName: "kube-api-access-gxm4x") pod "3bc390cd-8043-4c98-b7ce-c12170795362" (UID: "3bc390cd-8043-4c98-b7ce-c12170795362"). InnerVolumeSpecName "kube-api-access-gxm4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.259223 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3bc390cd-8043-4c98-b7ce-c12170795362" (UID: "3bc390cd-8043-4c98-b7ce-c12170795362"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.297542 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gxm4x\" (UniqueName: \"kubernetes.io/projected/3bc390cd-8043-4c98-b7ce-c12170795362-kube-api-access-gxm4x\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.297573 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.297582 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3bc390cd-8043-4c98-b7ce-c12170795362-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.822820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bq9kf" event={"ID":"3bc390cd-8043-4c98-b7ce-c12170795362","Type":"ContainerDied","Data":"692e290ffd1bb0bf80c942964ddc2e19c3d4374e1f1bb6ba46b12a95e1c75bc8"} Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.822887 4985 scope.go:117] "RemoveContainer" containerID="a12e02f9a480b4c1e01983765be48bf37602ae67e23ecd56f0d62a1331d98c3e" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.822938 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bq9kf" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.855061 4985 scope.go:117] "RemoveContainer" containerID="274788a6ff58425f1ec3dc66cad627f3b9911ef7a411c12b374dd4064131c4fe" Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.857747 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.870321 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bq9kf"] Jan 28 20:00:37 crc kubenswrapper[4985]: I0128 20:00:37.883626 4985 scope.go:117] "RemoveContainer" containerID="fbcb4e57c66f42d19bfb4fb5f2f9a72f9458e83a1b7c389068e41fb01f3d54eb" Jan 28 20:00:39 crc kubenswrapper[4985]: I0128 20:00:39.280592 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" path="/var/lib/kubelet/pods/3bc390cd-8043-4c98-b7ce-c12170795362/volumes" Jan 28 20:00:43 crc kubenswrapper[4985]: I0128 20:00:43.872071 4985 scope.go:117] "RemoveContainer" containerID="338f8d06b8e77092f3ed49ded314fa263d3bc00689eede0c01a39e28fc35ddd0" Jan 28 20:00:49 crc kubenswrapper[4985]: I0128 20:00:49.264361 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:00:49 crc kubenswrapper[4985]: E0128 20:00:49.265295 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.529130 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 20:00:51 crc 
kubenswrapper[4985]: E0128 20:00:51.530455 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-utilities" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530483 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-utilities" Jan 28 20:00:51 crc kubenswrapper[4985]: E0128 20:00:51.530522 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-content" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530533 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="extract-content" Jan 28 20:00:51 crc kubenswrapper[4985]: E0128 20:00:51.530557 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530568 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.530944 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bc390cd-8043-4c98-b7ce-c12170795362" containerName="registry-server" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.532238 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.535926 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hb5cc" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.536563 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.537719 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.540380 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.542639 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.578412 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.578467 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.578583 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"tempest-tests-tempest\" (UID: 
\"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681158 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681213 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681304 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681328 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681349 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681376 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681397 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681429 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.681504 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: 
\"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.682114 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.683845 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.688828 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784187 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784261 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784290 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784313 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784349 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784464 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784934 4985 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.784961 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.789035 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.789775 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.790672 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.811734 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.824682 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"tempest-tests-tempest\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " pod="openstack/tempest-tests-tempest" Jan 28 20:00:51 crc kubenswrapper[4985]: I0128 20:00:51.874701 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:00:52 crc kubenswrapper[4985]: W0128 20:00:52.392747 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda808dc72_a951_4f07_a612_2fde39a49a30.slice/crio-8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840 WatchSource:0}: Error finding container 8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840: Status 404 returned error can't find the container with id 8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840 Jan 28 20:00:52 crc kubenswrapper[4985]: I0128 20:00:52.392904 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Jan 28 20:00:53 crc kubenswrapper[4985]: I0128 20:00:53.020749 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerStarted","Data":"8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840"} Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.368068 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.372481 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.385374 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.571924 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.572345 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.572457 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.674344 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.674404 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"redhat-operators-spssk\" (UID: 
\"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.674510 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.675046 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.675067 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:54 crc kubenswrapper[4985]: I0128 20:00:54.708155 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"redhat-operators-spssk\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:55 crc kubenswrapper[4985]: I0128 20:00:55.001281 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:00:55 crc kubenswrapper[4985]: I0128 20:00:55.644660 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:00:56 crc kubenswrapper[4985]: I0128 20:00:56.066931 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="3c2283779a914e25036c37ef2827bd05492395f0fd0244baa58d85cf05f996a1" exitCode=0 Jan 28 20:00:56 crc kubenswrapper[4985]: I0128 20:00:56.067113 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"3c2283779a914e25036c37ef2827bd05492395f0fd0244baa58d85cf05f996a1"} Jan 28 20:00:56 crc kubenswrapper[4985]: I0128 20:00:56.067208 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"28f0a59519c9b60c4ce3a2ff63447bff887c38b436a2ce97a8fb8d2c39a8e834"} Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.245854 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29493841-rkhj6"] Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.249076 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.264277 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493841-rkhj6"] Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.422138 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.422434 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.422462 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.423383 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.525753 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.525909 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.525999 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.526025 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.532595 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.540640 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.541567 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.543480 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"keystone-cron-29493841-rkhj6\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:00 crc kubenswrapper[4985]: I0128 20:01:00.621401 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:01:04 crc kubenswrapper[4985]: I0128 20:01:04.264946 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:04 crc kubenswrapper[4985]: E0128 20:01:04.266021 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:08 crc kubenswrapper[4985]: W0128 20:01:08.503495 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc901d430_df5f_4afa_8a40_9ed18d2ad552.slice/crio-f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d WatchSource:0}: Error finding container f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d: Status 404 returned error can't find the container with id f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d Jan 28 20:01:08 crc kubenswrapper[4985]: I0128 20:01:08.508422 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29493841-rkhj6"] Jan 28 20:01:09 crc kubenswrapper[4985]: I0128 20:01:09.233682 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerStarted","Data":"f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d"} Jan 28 20:01:10 crc kubenswrapper[4985]: I0128 20:01:10.254228 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" 
event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerStarted","Data":"add1992ce6f5ead56094c5643c8729c313a9a2d5dd2d22b565d4688777afae96"} Jan 28 20:01:10 crc kubenswrapper[4985]: I0128 20:01:10.283542 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29493841-rkhj6" podStartSLOduration=10.283515959 podStartE2EDuration="10.283515959s" podCreationTimestamp="2026-01-28 20:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 20:01:10.279278818 +0000 UTC m=+6481.105841669" watchObservedRunningTime="2026-01-28 20:01:10.283515959 +0000 UTC m=+6481.110078810" Jan 28 20:01:16 crc kubenswrapper[4985]: I0128 20:01:16.263828 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:16 crc kubenswrapper[4985]: E0128 20:01:16.264634 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:31 crc kubenswrapper[4985]: I0128 20:01:31.277588 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:31 crc kubenswrapper[4985]: E0128 20:01:31.278440 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:43 crc kubenswrapper[4985]: I0128 20:01:43.688972 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.532560472s: [/var/lib/containers/storage/overlay/1c5d844420c9e6694b90098e23024dca450ee6c45edf1bee0c323f8999be7645/diff /var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/galera/0.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:01:44 crc kubenswrapper[4985]: I0128 20:01:44.264883 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:44 crc kubenswrapper[4985]: E0128 20:01:44.266885 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:01:59 crc kubenswrapper[4985]: I0128 20:01:59.264848 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:01:59 crc kubenswrapper[4985]: E0128 20:01:59.265865 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:02 crc kubenswrapper[4985]: I0128 20:02:02.843773 4985 trace.go:236] Trace[2066331907]: "Calculate volume metrics of ca-trust-extracted for pod openshift-image-registry/image-registry-66df7c8f76-77p8r" (28-Jan-2026 20:02:01.238) (total time: 1481ms): Jan 28 20:02:02 crc kubenswrapper[4985]: Trace[2066331907]: [1.481882264s] [1.481882264s] END Jan 28 20:02:04 crc kubenswrapper[4985]: I0128 20:02:04.418822 4985 trace.go:236] Trace[22073964]: "Calculate volume metrics of catalog-content for pod openshift-marketplace/certified-operators-mclkd" (28-Jan-2026 20:02:02.408) (total time: 2010ms): Jan 28 20:02:04 crc kubenswrapper[4985]: Trace[22073964]: [2.010191273s] [2.010191273s] END Jan 28 20:02:04 crc kubenswrapper[4985]: I0128 20:02:04.501396 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 3.379233153s: [/var/lib/containers/storage/overlay/2b74aa33c03668223a87dd3c1ff4a84a09224e18713c6538d4c947dab78be4d8/diff /var/log/pods/openstack_openstackclient_1d8f391e-0ed3-4969-b61b-5b9d602644fa/openstackclient/0.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:02:07 crc kubenswrapper[4985]: I0128 20:02:07.031418 4985 generic.go:334] "Generic (PLEG): container finished" podID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerID="add1992ce6f5ead56094c5643c8729c313a9a2d5dd2d22b565d4688777afae96" exitCode=0 Jan 28 20:02:07 crc kubenswrapper[4985]: I0128 20:02:07.031562 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerDied","Data":"add1992ce6f5ead56094c5643c8729c313a9a2d5dd2d22b565d4688777afae96"} Jan 28 20:02:12 crc kubenswrapper[4985]: I0128 20:02:12.264857 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:12 crc kubenswrapper[4985]: E0128 20:02:12.266492 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.351294 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.354424 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.391524 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.391583 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.391788 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.493652 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.493802 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.493859 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.494343 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.496774 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.520643 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.545445 4985 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"redhat-marketplace-h4kmr\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") " pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:13 crc kubenswrapper[4985]: I0128 20:02:13.732686 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.790060 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.863787 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.863866 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.864095 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.864268 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") pod \"c901d430-df5f-4afa-8a40-9ed18d2ad552\" (UID: \"c901d430-df5f-4afa-8a40-9ed18d2ad552\") " Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.881868 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg" (OuterVolumeSpecName: "kube-api-access-zfbbg") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "kube-api-access-zfbbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.882381 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.915969 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.942116 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data" (OuterVolumeSpecName: "config-data") pod "c901d430-df5f-4afa-8a40-9ed18d2ad552" (UID: "c901d430-df5f-4afa-8a40-9ed18d2ad552"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.966981 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.967023 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfbbg\" (UniqueName: \"kubernetes.io/projected/c901d430-df5f-4afa-8a40-9ed18d2ad552-kube-api-access-zfbbg\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.967035 4985 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: I0128 20:02:22.967043 4985 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/c901d430-df5f-4afa-8a40-9ed18d2ad552-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 20:02:22 crc kubenswrapper[4985]: E0128 20:02:22.997779 4985 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Jan 28 20:02:23 crc kubenswrapper[4985]: E0128 20:02:23.002886 4985 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-f5tss,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a808dc72-a951-4f07-a612-2fde39a49a30): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 20:02:23 crc kubenswrapper[4985]: E0128 20:02:23.004913 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="a808dc72-a951-4f07-a612-2fde39a49a30" Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.259108 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29493841-rkhj6" event={"ID":"c901d430-df5f-4afa-8a40-9ed18d2ad552","Type":"ContainerDied","Data":"f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d"} Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.259621 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0418f93411fb9dc138c97d2c50934d37228bbc243645ed6f96e4e8ee69e3b1d" Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.259154 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29493841-rkhj6" Jan 28 20:02:23 crc kubenswrapper[4985]: E0128 20:02:23.271577 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" Jan 28 20:02:23 crc kubenswrapper[4985]: I0128 20:02:23.584342 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:02:23 crc kubenswrapper[4985]: W0128 20:02:23.590556 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode90a8845_3321_45ae_8c9d_524afa36cdd7.slice/crio-7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06 WatchSource:0}: Error finding container 7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06: Status 404 returned error can't find the container with id 7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06 Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.273627 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44"} Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.276171 4985 generic.go:334] "Generic (PLEG): container finished" podID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerID="eaa8b31fd567cbe5402dee337791c77b7d17c2a64b306b5f934b501e7555c359" exitCode=0 Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.276218 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"eaa8b31fd567cbe5402dee337791c77b7d17c2a64b306b5f934b501e7555c359"} Jan 28 20:02:24 crc kubenswrapper[4985]: I0128 20:02:24.276401 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerStarted","Data":"7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06"} Jan 28 20:02:26 crc kubenswrapper[4985]: I0128 20:02:26.264200 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:26 crc kubenswrapper[4985]: E0128 20:02:26.265275 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:30 crc kubenswrapper[4985]: I0128 20:02:30.352141 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerStarted","Data":"6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426"} Jan 28 20:02:35 crc kubenswrapper[4985]: I0128 20:02:35.701498 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:02:35 crc kubenswrapper[4985]: I0128 20:02:35.701901 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:02:37 crc kubenswrapper[4985]: I0128 20:02:37.004781 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:02:38 crc kubenswrapper[4985]: I0128 20:02:38.263873 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:38 crc kubenswrapper[4985]: E0128 20:02:38.264504 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:42 crc kubenswrapper[4985]: I0128 20:02:42.921670 4985 generic.go:334] "Generic (PLEG): container finished" podID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerID="6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426" exitCode=0 Jan 28 20:02:42 crc kubenswrapper[4985]: I0128 20:02:42.921793 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426"} Jan 28 20:02:43 crc kubenswrapper[4985]: I0128 20:02:43.118723 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Jan 28 20:02:44 crc kubenswrapper[4985]: I0128 20:02:44.944899 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" 
event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerStarted","Data":"5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599"} Jan 28 20:02:44 crc kubenswrapper[4985]: I0128 20:02:44.986645 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-h4kmr" podStartSLOduration=12.541055053000001 podStartE2EDuration="31.986608699s" podCreationTimestamp="2026-01-28 20:02:13 +0000 UTC" firstStartedPulling="2026-01-28 20:02:24.279168666 +0000 UTC m=+6555.105731497" lastFinishedPulling="2026-01-28 20:02:43.724722322 +0000 UTC m=+6574.551285143" observedRunningTime="2026-01-28 20:02:44.96268298 +0000 UTC m=+6575.789245801" watchObservedRunningTime="2026-01-28 20:02:44.986608699 +0000 UTC m=+6575.813171520" Jan 28 20:02:46 crc kubenswrapper[4985]: I0128 20:02:46.976820 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerStarted","Data":"ee163311dba6c1ce70ff2544f9371712e8075bba77bbad31800b493e5588741e"} Jan 28 20:02:47 crc kubenswrapper[4985]: I0128 20:02:47.008126 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=6.288863188 podStartE2EDuration="1m57.008098843s" podCreationTimestamp="2026-01-28 20:00:50 +0000 UTC" firstStartedPulling="2026-01-28 20:00:52.395828508 +0000 UTC m=+6463.222391349" lastFinishedPulling="2026-01-28 20:02:43.115064183 +0000 UTC m=+6573.941627004" observedRunningTime="2026-01-28 20:02:46.992633124 +0000 UTC m=+6577.819195945" watchObservedRunningTime="2026-01-28 20:02:47.008098843 +0000 UTC m=+6577.834661694" Jan 28 20:02:47 crc kubenswrapper[4985]: I0128 20:02:47.989394 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44" exitCode=0 Jan 28 20:02:47 crc kubenswrapper[4985]: I0128 20:02:47.989464 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44"} Jan 28 20:02:50 crc kubenswrapper[4985]: I0128 20:02:50.023145 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c"} Jan 28 20:02:50 crc kubenswrapper[4985]: I0128 20:02:50.060079 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spssk" podStartSLOduration=11.175441512 podStartE2EDuration="1m56.060054744s" podCreationTimestamp="2026-01-28 20:00:54 +0000 UTC" firstStartedPulling="2026-01-28 20:01:04.06274246 +0000 UTC m=+6474.889305301" lastFinishedPulling="2026-01-28 20:02:48.947355702 +0000 UTC m=+6579.773918533" observedRunningTime="2026-01-28 20:02:50.056536044 +0000 UTC m=+6580.883098865" watchObservedRunningTime="2026-01-28 20:02:50.060054744 +0000 UTC m=+6580.886617575" Jan 28 20:02:51 crc kubenswrapper[4985]: I0128 20:02:51.276351 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:02:51 crc kubenswrapper[4985]: E0128 20:02:51.276889 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:02:53 crc kubenswrapper[4985]: I0128 20:02:53.732816 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:53 crc kubenswrapper[4985]: I0128 20:02:53.733449 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:02:54 crc kubenswrapper[4985]: I0128 20:02:54.787294 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-h4kmr" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" probeResult="failure" output=< Jan 28 20:02:54 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:02:54 crc kubenswrapper[4985]: > Jan 28 20:02:55 crc kubenswrapper[4985]: I0128 20:02:55.002542 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:02:55 crc kubenswrapper[4985]: I0128 20:02:55.002855 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:02:56 crc kubenswrapper[4985]: I0128 20:02:56.054505 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:02:56 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:02:56 crc kubenswrapper[4985]: > Jan 28 20:03:04 crc kubenswrapper[4985]: I0128 20:03:04.784886 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-h4kmr" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:04 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:04 crc kubenswrapper[4985]: > Jan 28 20:03:06 crc kubenswrapper[4985]: I0128 20:03:06.051789 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:06 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:06 crc kubenswrapper[4985]: > Jan 28 20:03:06 crc kubenswrapper[4985]: I0128 20:03:06.264839 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:03:06 crc kubenswrapper[4985]: E0128 20:03:06.265545 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:03:13 crc kubenswrapper[4985]: I0128 20:03:13.798114 4985 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:03:13 crc kubenswrapper[4985]: I0128 20:03:13.863024 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-h4kmr" Jan 28 20:03:14 crc kubenswrapper[4985]: I0128 20:03:14.042349 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"] Jan 28 20:03:15 crc kubenswrapper[4985]: I0128 20:03:15.353738 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-h4kmr" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" containerID="cri-o://5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599" gracePeriod=2 Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.074474 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:03:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:03:17 crc kubenswrapper[4985]: > Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356596 4985 generic.go:334] "Generic (PLEG): container finished" podID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerID="5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599" exitCode=0 Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356635 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599"} Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356660 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-h4kmr" event={"ID":"e90a8845-3321-45ae-8c9d-524afa36cdd7","Type":"ContainerDied","Data":"7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06"} Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.356671 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e51ef6d76839376c24d7507a45b3c60c636dc46cf99e59655b204bbb908ed06" Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.402012 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr"
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.491639 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") pod \"e90a8845-3321-45ae-8c9d-524afa36cdd7\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") "
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.491785 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") pod \"e90a8845-3321-45ae-8c9d-524afa36cdd7\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") "
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.491925 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") pod \"e90a8845-3321-45ae-8c9d-524afa36cdd7\" (UID: \"e90a8845-3321-45ae-8c9d-524afa36cdd7\") "
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.502040 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities" (OuterVolumeSpecName: "utilities") pod "e90a8845-3321-45ae-8c9d-524afa36cdd7" (UID: "e90a8845-3321-45ae-8c9d-524afa36cdd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.524036 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e90a8845-3321-45ae-8c9d-524afa36cdd7" (UID: "e90a8845-3321-45ae-8c9d-524afa36cdd7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.529750 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn" (OuterVolumeSpecName: "kube-api-access-8fzzn") pod "e90a8845-3321-45ae-8c9d-524afa36cdd7" (UID: "e90a8845-3321-45ae-8c9d-524afa36cdd7"). InnerVolumeSpecName "kube-api-access-8fzzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.594840 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.594874 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e90a8845-3321-45ae-8c9d-524afa36cdd7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:16.594887 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fzzn\" (UniqueName: \"kubernetes.io/projected/e90a8845-3321-45ae-8c9d-524afa36cdd7-kube-api-access-8fzzn\") on node \"crc\" DevicePath \"\""
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:17.366929 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-h4kmr"
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:17.404990 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"]
Jan 28 20:03:17 crc kubenswrapper[4985]: I0128 20:03:17.416843 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-h4kmr"]
Jan 28 20:03:19 crc kubenswrapper[4985]: I0128 20:03:19.283158 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" path="/var/lib/kubelet/pods/e90a8845-3321-45ae-8c9d-524afa36cdd7/volumes"
Jan 28 20:03:21 crc kubenswrapper[4985]: I0128 20:03:21.276150 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"
Jan 28 20:03:21 crc kubenswrapper[4985]: E0128 20:03:21.276970 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:03:26 crc kubenswrapper[4985]: I0128 20:03:26.057230 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:03:26 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:03:26 crc kubenswrapper[4985]: >
Jan 28 20:03:34 crc kubenswrapper[4985]: I0128 20:03:34.264338 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"
Jan 28 20:03:34 crc kubenswrapper[4985]: E0128 20:03:34.265078 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:03:36 crc kubenswrapper[4985]: I0128 20:03:36.154876 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:03:36 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:03:36 crc kubenswrapper[4985]: >
Jan 28 20:03:46 crc kubenswrapper[4985]: I0128 20:03:46.102068 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:03:46 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:03:46 crc kubenswrapper[4985]: >
Jan 28 20:03:46 crc kubenswrapper[4985]: I0128 20:03:46.273111 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"
Jan 28 20:03:46 crc kubenswrapper[4985]: E0128 20:03:46.274545 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:03:54 crc kubenswrapper[4985]: I0128 20:03:54.900278 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:54 crc kubenswrapper[4985]: I0128 20:03:54.938634 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:55 crc kubenswrapper[4985]: I0128 20:03:55.107071 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:55 crc kubenswrapper[4985]: I0128 20:03:55.107141 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:56 crc kubenswrapper[4985]: I0128 20:03:56.659686 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=<
Jan 28 20:03:56 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s
Jan 28 20:03:56 crc kubenswrapper[4985]: >
Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.329539 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.329879 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.546571 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.591475 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:57 crc kubenswrapper[4985]: I0128 20:03:57.771442 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105475 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105535 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105572 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.105659 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219554 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219833 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219558 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.219906 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.260519 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.314557 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.350438 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:58 crc kubenswrapper[4985]: E0128 20:03:58.385312 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:03:58 crc kubenswrapper[4985]: I0128 20:03:58.833964 4985 trace.go:236] Trace[252571599]: "Calculate volume metrics of persistence for pod openstack/rabbitmq-server-2" (28-Jan-2026 20:03:56.440) (total time: 2359ms):
Jan 28 20:03:58 crc kubenswrapper[4985]: Trace[252571599]: [2.359531652s] [2.359531652s] END
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.014231 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.057428 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.233453 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.233504 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.281692 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.702525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.702540 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737737 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737786 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737815 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.737847 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.745161 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out"
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.849834 4985 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-pcb4d container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:03:59 crc kubenswrapper[4985]: I0128 20:03:59.849924 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039280 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039392 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039308 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.039526 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107281 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107359 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107268 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.107478 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.340458 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.340646 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.464396 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.547440 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.547724 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.547762 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.548083 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.639755 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.639827 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.640437 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:00 crc kubenswrapper[4985]: I0128 20:04:00.640457 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005550 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005550 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005861 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.005933 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.167430 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.167628 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328730 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328779 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328837 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.328786 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.365175 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.365233 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.366690 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.366721 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373021 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.76:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373094 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373120 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.373177 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582455 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582766 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582619 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.582885 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697438 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697510 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697587 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697613 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697449 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697652 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697760 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.697815 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.815168 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:01 crc kubenswrapper[4985]: I0128 20:04:01.815258 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.046645 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.046781 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.190579 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.190664 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.731110 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.731437 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.732824 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:02 crc kubenswrapper[4985]: I0128 20:04:02.732832 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.331272 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.331669 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.494347 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.494425 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.727070 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.727128 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.736082 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.742820 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.742939 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:03 crc kubenswrapper[4985]: I0128 20:04:03.743834 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.015462 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8081/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.016558 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409492 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409621 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409949 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.409974 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.497501 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.497577 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.631111 4985 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-jrf9w container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.631166 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podUID="645ec0ef-97a6-4e2f-b691-ffcbcab4eed7" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.734869 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.735214 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.735473 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.735659 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.767111 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": context deadline exceeded" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.767186 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": context deadline exceeded"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.768125 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.768149 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.794554 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.794612 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podUID="21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.879871 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.879941 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.963559 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:04 crc kubenswrapper[4985]: I0128 20:04:04.963612 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171742 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171803 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": context deadline exceeded"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171875 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.171893 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.172130 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.172179 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.174996 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.175045 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.175188 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.175209 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547574 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547637 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547673 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.547738 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.672715 4985 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.672787 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.735518 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.793693 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.793793 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podUID="21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.818531 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.818624 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.861246 4985 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.861327 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ac72f54d-936d-4c98-9f91-918f7a05b5d1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.878791 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.878884 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.935401 4985 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.935470 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="664a7afe-25ae-45f8-81bd-9a9c59c431cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.963856 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:05 crc kubenswrapper[4985]: I0128 20:04:05.963914 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038347 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038403 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038369 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.038717 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/live\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.107621 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.107695 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/live\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.108319 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Liveness probe status=failure output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.108336 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/live\": context deadline exceeded"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.716693 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.717084 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.716743 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.717191 4985 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:06 crc kubenswrapper[4985]: I0128 20:04:06.990649 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.072528 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podUID="99893bb5-33ef-4159-bf8f-1c79a58e74d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.155495 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.155546 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.155910 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podUID="99893bb5-33ef-4159-bf8f-1c79a58e74d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.197538 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.197573 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.280470 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerName="manager" probeResult="failure" 
output="Get \"http://10.217.0.107:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.280913 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.436430 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.436481 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podUID="75e682e9-e5a5-47f1-83cc-c8004ebe224a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.437104 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.519498 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.519598 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.525631 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602596 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602724 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded 
while awaiting headers)" start-of-body= Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602770 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.602624 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podUID="75e682e9-e5a5-47f1-83cc-c8004ebe224a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.767651 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850455 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850535 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850745 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850831 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.850869 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded 
(Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.933655 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.934138 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:07 crc kubenswrapper[4985]: I0128 20:04:07.934184 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.043410 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.091562 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132431 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132469 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132507 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132515 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded 
while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.132431 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.138717 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.139054 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.138772 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.139298 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.226739 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.235985 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48"} pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.251820 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" containerID="cri-o://9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48" gracePeriod=30 Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 
20:04:08.267472 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.309515 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.309625 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.473861 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.494172 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.494443 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555562 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555637 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555663 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerName="manager" probeResult="failure" output="Get 
\"http://10.217.0.120:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555699 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555495 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.555881 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.735272 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.735297 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.956292 4985 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-v2hv6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.956378 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podUID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:08 crc kubenswrapper[4985]: I0128 20:04:08.997544 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.192484 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.283593 4985 prober.go:107] "Probe failed" 
probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.703490 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.703535 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.703545 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737328 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737400 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737489 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.737403 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.849870 4985 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-pcb4d container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: 
request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:09 crc kubenswrapper[4985]: I0128 20:04:09.850171 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038001 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038068 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038139 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.038076 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106584 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106626 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106666 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.106690 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" 
probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.299525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.544467 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.544530 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626489 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626507 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626589 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626616 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.626534 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.708990 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context 
deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.709076 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.709344 4985 patch_prober.go:28] interesting pod/downloads-7954f5f757-hpz9q container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.709404 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-hpz9q" podUID="25061ce4-ca31-4da7-ad36-c6535e1d2028" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.8:8080/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.711466 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.711511 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.734462 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.735411 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924363 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924424 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924580 4985 patch_prober.go:28] 
interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": context deadline exceeded" start-of-body= Jan 28 20:04:10 crc kubenswrapper[4985]: I0128 20:04:10.924735 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": context deadline exceeded" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.168516 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.168631 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.328010 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.328072 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.328527 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.329051 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.373338 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.373404 4985 
prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630420 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630443 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630476 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630505 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630547 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630617 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630627 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.630692 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" 
containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712393 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712431 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712454 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712488 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712511 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712558 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712569 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.712590 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.735795 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.814845 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.814913 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:11 crc kubenswrapper[4985]: I0128 20:04:11.869630 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.052424 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.112454 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.196444 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.196541 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.237454 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.237536 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" 
probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.248936 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.286232 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:04:12 crc kubenswrapper[4985]: E0128 20:04:12.288864 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.365455 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.365525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.731224 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.731741 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736327 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736390 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736424 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:12 crc kubenswrapper[4985]: I0128 20:04:12.736572 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:13 
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.332890 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Liveness probe status=failure output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.333061 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.373492 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.373497 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.373541 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.501484 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.501692 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.501809 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.679458 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.731415 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733167 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733707 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733879 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.733962 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.997006 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8081/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:13 crc kubenswrapper[4985]: I0128 20:04:13.997054 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8080/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.236326 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.236397 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.348673 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.348750 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.502408 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.502483 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.628069 4985 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-jrf9w container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.628163 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podUID="645ec0ef-97a6-4e2f-b691-ffcbcab4eed7" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.733527 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" podUID="5eaf2e7f-83ab-438b-8de3-75886a97ada4" containerName="nbdb" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735341 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-t7xb2" podUID="5eaf2e7f-83ab-438b-8de3-75886a97ada4" containerName="sbdb" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735399 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735609 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735801 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735803 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.735939 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.736236 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.768649 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.768714 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.768775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.771732 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="metrics-server" containerStatusID={"Type":"cri-o","ID":"7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d"} pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" containerMessage="Container metrics-server failed liveness probe, will be restarted"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.774296 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" containerID="cri-o://7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d" gracePeriod=170
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.795128 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.795196 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podUID="21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.879280 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.879355 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.963012 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:14 crc kubenswrapper[4985]: I0128 20:04:14.963076 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.051701 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.052207 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.051738 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.052331 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.106463 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.106532 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.106620 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.107899 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.107972 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.107909 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.108080 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.364930 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.364995 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545725 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545807 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
(Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545742 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.545865 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715440 4985 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715502 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715440 4985 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.715550 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.753432 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.756438 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="Get \"http://10.217.0.43:9898/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.819082 4985 patch_prober.go:28] interesting pod/logging-loki-ingester-0 container/loki-ingester namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.819141 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openshift-logging/logging-loki-ingester-0" podUID="e322915e-933c-4de4-98dd-ef047ee5b056" containerName="loki-ingester" probeResult="failure" output="Get \"https://10.217.0.57:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.860970 4985 patch_prober.go:28] interesting pod/logging-loki-compactor-0 container/loki-compactor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.861031 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-compactor-0" podUID="ac72f54d-936d-4c98-9f91-918f7a05b5d1" containerName="loki-compactor" probeResult="failure" output="Get \"https://10.217.0.58:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.932894 4985 patch_prober.go:28] interesting pod/logging-loki-index-gateway-0 container/loki-index-gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:15 crc kubenswrapper[4985]: I0128 20:04:15.933237 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-index-gateway-0" podUID="664a7afe-25ae-45f8-81bd-9a9c59c431cd" containerName="loki-index-gateway" probeResult="failure" output="Get \"https://10.217.0.60:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.108519 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.108945 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.372397 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.372487 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 
20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.503268 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716466 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716499 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716553 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.716574 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.963469 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:16 crc kubenswrapper[4985]: I0128 20:04:16.963559 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.005540 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.005598 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.163593 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.164310 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/designate-operator-controller-manager-b45d7bf98-75d84" podUID="4dfb4621-d061-4224-8aee-840726565aa3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.104:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.204416 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/glance-operator-controller-manager-78fdd796fd-6bdmh" podUID="99893bb5-33ef-4159-bf8f-1c79a58e74d9" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.105:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.204426 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.246726 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" podUID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.107:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.252345 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.252443 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.254360 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="hostpath-provisioner" containerStatusID={"Type":"cri-o","ID":"eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409"} pod="hostpath-provisioner/csi-hostpathplugin-5zj27" containerMessage="Container hostpath-provisioner failed liveness probe, will be restarted" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.258607 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" podUID="99828525-9397-448d-9a51-bc0da88038ac" containerName="hostpath-provisioner" containerID="cri-o://eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409" 
gracePeriod=30 Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372461 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372514 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372537 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.372581 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.414484 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.414568 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.414680 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.416107 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-api-0" podUID="841350c5-b9e8-4331-9282-e129f8152153" containerName="cinder-api" probeResult="failure" output="Get \"https://10.217.0.209:8776/healthcheck\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.498538 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/manila-operator-controller-manager-78c6999f6f-9lm5f" podUID="654a2c56-81a7-4b32-ad1d-c4d60b054b47" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.111:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.539569 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" 
podUID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.110:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.539846 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ironic-operator-controller-manager-598f7747c9-s2n6z" podUID="75e682e9-e5a5-47f1-83cc-c8004ebe224a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.109:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.573078 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" start-of-body= Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.573186 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": context deadline exceeded" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.621554 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.621680 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.662528 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.662546 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" podUID="367b6525-0367-437a-9fe3-b2007411f4af" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.116:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.662959 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.734319 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" podUID="45d84233-dc44-4b3c-8aaa-f08ab50c0512" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.734319 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" 
probeResult="failure" output="command timed out" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.735890 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/heat-engine-5df4f6c8f9-fvvqb" podUID="45d84233-dc44-4b3c-8aaa-f08ab50c0512" containerName="heat-engine" probeResult="failure" output="command timed out" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.736008 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.736057 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/ceilometer-0" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.741672 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="ceilometer-central-agent" containerStatusID={"Type":"cri-o","ID":"c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b"} pod="openstack/ceilometer-0" containerMessage="Container ceilometer-central-agent failed liveness probe, will be restarted" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.741789 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-central-agent" containerID="cri-o://c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b" gracePeriod=30 Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.772324 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.772471 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 20:04:17 crc kubenswrapper[4985]: I0128 20:04:17.935641 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-74779d9b4-2xxwx" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.092513 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105400 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105473 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105423 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105534 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.105563 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.126864 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller-manager" containerStatusID={"Type":"cri-o","ID":"03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c"} pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" containerMessage="Container controller-manager failed liveness probe, will be restarted" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.126936 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" containerID="cri-o://03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c" gracePeriod=30 Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137269 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Liveness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137329 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137373 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.137830 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: 
I0128 20:04:18.137896 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.145032 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="route-controller-manager" containerStatusID={"Type":"cri-o","ID":"4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea"} pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" containerMessage="Container route-controller-manager failed liveness probe, will be restarted" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.145097 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" containerID="cri-o://4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea" gracePeriod=30 Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.185443 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.185590 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.226517 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/swift-operator-controller-manager-547cbdb99f-9kbdr" podUID="c95374e8-7d41-4a49-add9-7f28196d70eb" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.119:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.334495 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" podUID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.120:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.375490 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.375666 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.416713 4985 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" podUID="d4d6e990-839d-4186-9382-1a67922556df" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.122:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.440689 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.440771 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.440852 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.494527 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.664500 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.705448 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84" podUID="9897766d-6497-4d0e-bd9a-ef8e31a08e24" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.112:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.735046 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-f287q" podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.735400 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.735645 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ovn-controller-ovs-f287q" podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.737678 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-f287q" 
podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovsdb-server" probeResult="failure" output="command timed out" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.751522 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-ovs-f287q" podUID="2c181f14-26b7-49f4-9ae0-869d9b291938" containerName="ovs-vswitchd" probeResult="failure" output="command timed out" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.815420 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" podUID="9c7284ab-b40f-4275-b85e-77aebd660135" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.114:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.955059 4985 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-v2hv6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.17:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.955445 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podUID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.957280 4985 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-v2hv6 container/oauth-apiserver namespace/openshift-oauth-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:18 crc kubenswrapper[4985]: I0128 20:04:18.957346 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-v2hv6" podUID="c731b198-314f-46a9-ad13-a4cc6c7bab94" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.17:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039479 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039583 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039594 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.039698 4985 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.233528 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.233683 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.234323 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.235018 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Liveness probe status=failure output="Get \"https://10.217.0.30:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.235052 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.281398 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.361116 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.447240 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" podUID="1310770f-7cb7-4874-b2a0-4ef733911716" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.121:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.504290 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703434 4985 prober.go:107] "Probe failed" probeType="Liveness" 
pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703459 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703529 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.703630 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.733157 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="webhook-server" containerStatusID={"Type":"cri-o","ID":"fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae"} pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" containerMessage="Container webhook-server failed liveness probe, will be restarted" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.733240 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" containerID="cri-o://fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae" gracePeriod=2 Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737007 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737067 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737065 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737185 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737269 
4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.737373 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.757218 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="prometheus-operator-admission-webhook" containerStatusID={"Type":"cri-o","ID":"555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47"} pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" containerMessage="Container prometheus-operator-admission-webhook failed liveness probe, will be restarted" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.757316 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" containerID="cri-o://555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47" gracePeriod=30 Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.849593 4985 patch_prober.go:28] interesting pod/authentication-operator-69f744f599-pcb4d container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.849655 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.849701 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.861236 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b"} pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Jan 28 20:04:19 crc kubenswrapper[4985]: I0128 20:04:19.861488 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" podUID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerName="authentication-operator" containerID="cri-o://9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b" gracePeriod=30 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 
20:04:20.037777 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.037852 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.55:8081/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.037802 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.037910 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.081548 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" podUID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.108:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.106956 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.107027 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.107039 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/gateway namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.107096 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.56:8081/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.382595 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.383115 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.383224 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.385200 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629556 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" probeResult="failure" output="Get \"http://127.0.0.1:7573/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629599 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629892 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.629937 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.631123 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.631802 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.631841 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:20 crc kubenswrapper[4985]: 
I0128 20:04:20.633113 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1"} pod="metallb-system/frr-k8s-qlsnv" containerMessage="Container controller failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.633164 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr" containerStatusID={"Type":"cri-o","ID":"4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8"} pod="metallb-system/frr-k8s-qlsnv" containerMessage="Container frr failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.661376 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" containerID="cri-o://a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1" gracePeriod=2 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711485 4985 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-77p8r container/registry namespace/openshift-image-registry: Liveness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711562 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" podUID="69277fd0-66c2-4094-87fd-eaa80e756e75" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711582 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711636 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711674 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.711705 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712437 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712464 4985 prober.go:107] "Probe 
failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712489 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712571 4985 patch_prober.go:28] interesting pod/image-registry-66df7c8f76-77p8r container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.712618 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66df7c8f76-77p8r" podUID="69277fd0-66c2-4094-87fd-eaa80e756e75" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.72:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.728685 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="frr-k8s-webhook-server" containerStatusID={"Type":"cri-o","ID":"35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5"} pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" containerMessage="Container frr-k8s-webhook-server failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.728753 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" containerID="cri-o://35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5" gracePeriod=10 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733487 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="marketplace-operator" containerStatusID={"Type":"cri-o","ID":"dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089"} pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" containerMessage="Container marketplace-operator failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733547 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" containerID="cri-o://dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089" gracePeriod=30 Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733620 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-handler-gkjzc" podUID="8f0319d2-9602-42b4-a3fb-c53bf5d3c244" containerName="nmstate-handler" probeResult="failure" output="command timed out" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.733773 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.740475 4985 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output="command timed out" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753390 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753475 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753593 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753406 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" podUID="57ef54a5-9891-4f69-9907-b726d30d4006" containerName="webhook-server" probeResult="failure" output="Get \"http://10.217.0.95:7472/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753806 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.753831 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.937201 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.937278 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.937354 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.938282 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.938327 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.938368 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.952149 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console-operator" containerStatusID={"Type":"cri-o","ID":"08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d"} pod="openshift-console-operator/console-operator-58897d9998-j6799" containerMessage="Container console-operator failed liveness probe, will be restarted" Jan 28 20:04:20 crc kubenswrapper[4985]: I0128 20:04:20.952230 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" containerID="cri-o://08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167331 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167384 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167436 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.167516 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.171768 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="controller" containerStatusID={"Type":"cri-o","ID":"32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8"} pod="metallb-system/controller-6968d8fdc4-8f79k" containerMessage="Container controller failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.172060 4985 
kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" containerID="cri-o://32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8" gracePeriod=2 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.336924 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.336977 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337015 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337214 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337288 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.337528 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.338467 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="olm-operator" containerStatusID={"Type":"cri-o","ID":"f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70"} pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" containerMessage="Container olm-operator failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.338509 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" containerID="cri-o://f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.364879 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:21 crc 
kubenswrapper[4985]: I0128 20:04:21.365107 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.373329 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.373387 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.467530 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" podUID="c1e8524e-e047-4872-9ee1-ae4e013f8825" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.123:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.620562 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-gkjzc" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.631219 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.631332 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.631464 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632300 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632366 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" 
containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632429 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632448 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632482 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632516 4985 patch_prober.go:28] interesting pod/package-server-manager-789f6589d5-pdwpf container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.632536 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-pdwpf" podUID="893bf4c0-7b07-4e49-bff4-9ed7d52b3196" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.22:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.636932 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="packageserver" containerStatusID={"Type":"cri-o","ID":"6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929"} pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" containerMessage="Container packageserver failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.636989 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" containerID="cri-o://6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.714447 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Liveness probe status=failure output="Get \"http://localhost:1936/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.714523 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz\": 
context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.714587 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.724861 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="router" containerStatusID={"Type":"cri-o","ID":"8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a"} pod="openshift-ingress/router-default-5444994796-qnrsp" containerMessage="Container router failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.724930 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" containerID="cri-o://8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a" gracePeriod=10 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.732479 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756447 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756502 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756558 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756584 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756652 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756757 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756619 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756797 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.756862 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.758329 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="catalog-operator" containerStatusID={"Type":"cri-o","ID":"d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23"} pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" containerMessage="Container catalog-operator failed liveness probe, will be restarted" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.758372 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" containerID="cri-o://d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23" gracePeriod=30 Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.798465 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.815515 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.815580 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.815660 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 20:04:21 crc 
kubenswrapper[4985]: I0128 20:04:21.842196 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-scheduler" containerStatusID={"Type":"cri-o","ID":"7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c"} pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" containerMessage="Container kube-scheduler failed liveness probe, will be restarted"
Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.842303 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" containerID="cri-o://7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c" gracePeriod=30
Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.938776 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:21 crc kubenswrapper[4985]: I0128 20:04:21.938843 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047414 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047499 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="metallb-system/speaker-6lq6d"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047443 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.047658 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lq6d"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.049620 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="speaker" containerStatusID={"Type":"cri-o","ID":"7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089"} pod="metallb-system/speaker-6lq6d" containerMessage="Container speaker failed liveness probe, will be restarted"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.049769 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" containerID="cri-o://7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089" gracePeriod=2
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.192448 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.192460 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/livez\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.192570 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.208245 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cert-manager-webhook" containerStatusID={"Type":"cri-o","ID":"efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093"} pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" containerMessage="Container cert-manager-webhook failed liveness probe, will be restarted"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.208337 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" containerID="cri-o://efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093" gracePeriod=30
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.233489 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/controller-6968d8fdc4-8f79k" podUID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerName="controller" probeResult="failure" output="Get \"http://10.217.0.97:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.289006 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.289161 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.634337 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.634412 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674124 4985 trace.go:236] Trace[1202791139]: "Calculate volume metrics of swift for pod openstack/swift-storage-0" (28-Jan-2026 20:04:14.471) (total time: 8182ms):
Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[1202791139]: [8.182339713s] [8.182339713s] END
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674144 4985 trace.go:236] Trace[830585391]: "Calculate volume metrics of prometheus-metric-storage-db for pod openstack/prometheus-metric-storage-0" (28-Jan-2026 20:04:13.772) (total time: 8882ms):
Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[830585391]: [8.882286216s] [8.882286216s] END
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674128 4985 trace.go:236] Trace[511881909]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-ingester-0" (28-Jan-2026 20:04:12.851) (total time: 9817ms):
Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[511881909]: [9.817961972s] [9.817961972s] END
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674144 4985 trace.go:236] Trace[1249959105]: "Calculate volume metrics of ovndbcluster-nb-etc-ovn for pod openstack/ovsdbserver-nb-0" (28-Jan-2026 20:04:20.743) (total time: 1910ms):
Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[1249959105]: [1.910528215s] [1.910528215s] END
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.674119 4985 trace.go:236] Trace[1597181343]: "Calculate volume metrics of storage for pod openshift-logging/logging-loki-index-gateway-0" (28-Jan-2026 20:04:19.471) (total time: 3183ms):
Jan 28 20:04:22 crc kubenswrapper[4985]: Trace[1597181343]: [3.183068984s] [3.183068984s] END
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731002 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731542 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731571 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.731654 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-galera-0"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.732512 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.735615 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.735822 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.737289 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.745173 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83"} pod="openstack/openstack-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.750088 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-operators-5whpv" podUID="5cad9e98-172d-4053-83a3-ebee724a6d9c" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.757282 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:22 crc kubenswrapper[4985]: I0128 20:04:22.757340 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.089439 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.234449 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.359576 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.359852 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.359985 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.493957 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/healthy\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.493994 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/prometheus-metric-storage-0" podUID="3d356801-0ed0-4343-87a9-29d23453d621" containerName="prometheus" probeResult="failure" output="Get \"https://10.217.0.178:9090/-/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.678742 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720450 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get \"http://10.217.0.48:8081/readyz\": read tcp 10.217.0.2:59058->10.217.0.48:8081: read: connection reset by peer" start-of-body=
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720492 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" podUID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.115:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720529 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": read tcp 10.217.0.2:59058->10.217.0.48:8081: read: connection reset by peer"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.720635 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.771736 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.771922 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.773948 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.775400 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.775519 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/openstack-cell1-galera-0"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.788406 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="galera" containerStatusID={"Type":"cri-o","ID":"c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0"} pod="openstack/openstack-cell1-galera-0" containerMessage="Container galera failed liveness probe, will be restarted"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.880656 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1"}
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.882076 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="a4a0bf327889a8b202f093668303cbe6c4dcf67ff2cf6693d3a23fd9a88737e1" exitCode=137
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.888037 4985 generic.go:334] "Generic (PLEG): container finished" podID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerID="555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47" exitCode=0
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.888145 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerDied","Data":"555b2897b605937380ab9cdf98df1b3029b5fd9c1370b8b411db0cd55c5d3b47"}
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.890663 4985 generic.go:334] "Generic (PLEG): container finished" podID="57ef54a5-9891-4f69-9907-b726d30d4006" containerID="fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae" exitCode=137
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.890717 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerDied","Data":"fdd72e77cc726ca0a1a4cf7375eda691bbda1220dee69172ff1e5101d96bbeae"}
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.895557 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.998202 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8080/livez\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:23 crc kubenswrapper[4985]: I0128 20:04:23.998316 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/kube-state-metrics-0"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.034559 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-state-metrics" containerStatusID={"Type":"cri-o","ID":"dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908"} pod="openstack/kube-state-metrics-0" containerMessage="Container kube-state-metrics failed liveness probe, will be restarted"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.034640 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" containerID="cri-o://dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908" gracePeriod=30
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.214247 4985 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Liveness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.214352 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.216034 4985 patch_prober.go:28] interesting pod/etcd-crc container/etcd namespace/openshift-etcd: Readiness probe status=failure output="HTTP probe failed with statuscode: 503" start-of-body=failed to establish etcd client: giving up getting a cached client after 3 tries
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.216108 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-etcd/etcd-crc" podUID="2139d3e2895fc6797b9c76a1b4c9886d" containerName="etcd" probeResult="failure" output="HTTP probe failed with statuscode: 503"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.237511 4985 patch_prober.go:28] interesting pod/apiserver-76f77b778f-2wxf2 container/openshift-apiserver namespace/openshift-apiserver: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.237595 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-apiserver/apiserver-76f77b778f-2wxf2" podUID="ebf5f82e-2a14-49d9-b670-59ed73e71203" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.30:8443/readyz?exclude=etcd&exclude=etcd-readiness\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.240531 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="frr" containerID="cri-o://4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8" gracePeriod=2
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.349354 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.349434 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.365463 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.366133 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.498035 4985 patch_prober.go:28] interesting pod/openshift-kube-scheduler-crc container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.498105 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podUID="3dcd261975c3d6b9a6ad6367fd4facd3" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.126.11:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.498188 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.627656 4985 patch_prober.go:28] interesting pod/nmstate-webhook-8474b5b9d8-jrf9w container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.627787 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w" podUID="645ec0ef-97a6-4e2f-b691-ffcbcab4eed7" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.88:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.627899 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.732086 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.733845 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.733909 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="44d6c73a-69d8-46fe-82f7-85b0b4fcdfe9" containerName="prometheus" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.734440 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/community-operators-z2xq5"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.734614 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.735057 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fx27"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736147 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736222 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736390 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736483 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wnjfp"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736525 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736719 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output="command timed out"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736732 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4fx27"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.736996 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2xq5"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.746281 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9"} pod="openshift-marketplace/redhat-marketplace-4fx27" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.746352 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" containerID="cri-o://f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" gracePeriod=30
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.749297 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a"} pod="openstack-operators/openstack-operator-index-wnjfp" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.749370 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" containerID="cri-o://a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" gracePeriod=30
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.751500 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c"} pod="openshift-marketplace/community-operators-z2xq5" containerMessage="Container registry-server failed liveness probe, will be restarted"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.751705 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" containerID="cri-o://acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" gracePeriod=30
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.767307 4985 patch_prober.go:28] interesting pod/metrics-server-6845d579bb-9lznf container/metrics-server namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.767364 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" podUID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerName="metrics-server" probeResult="failure" output="Get \"https://10.217.0.78:10250/livez\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.793874 4985 patch_prober.go:28] interesting pod/logging-loki-querier-76788598db-dkn9m container/loki-querier namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.793944 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m" podUID="21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7" containerName="loki-querier" probeResult="failure" output="Get \"https://10.217.0.53:3101/ready\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.794051 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.879052 4985 patch_prober.go:28] interesting pod/logging-loki-query-frontend-69d9546745-pcd6x container/loki-query-frontend namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.879678 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x" podUID="5c56d4fe-62c7-47ef-9a0f-607d899d19b8" containerName="loki-query-frontend" probeResult="failure" output="Get \"https://10.217.0.54:3101/loki/api/v1/status/buildinfo\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.879863 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.906721 4985 generic.go:334] "Generic (PLEG): container finished" podID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerID="dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908" exitCode=2
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.906809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerDied","Data":"dc0252c56541e6e97a4f6129007afca9a4dd9402da5c84c55d3d31fd8c345908"}
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.911801 4985 generic.go:334] "Generic (PLEG): container finished" podID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerID="4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8" exitCode=143
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.911862 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerDied","Data":"4f6591d0d275d0078b49f74da8009d5d995a9740fb3846677a55a9876831fac8"}
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.963724 4985 patch_prober.go:28] interesting pod/logging-loki-distributor-5f678c8dd6-2755m container/loki-distributor namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.963789 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m" podUID="effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb" containerName="loki-distributor" probeResult="failure" output="Get \"https://10.217.0.52:3101/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:24 crc kubenswrapper[4985]: I0128 20:04:24.963870 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.984163 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerDied","Data":"32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8"}
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.984172 4985 generic.go:334] "Generic (PLEG): container finished" podID="5fd77adb-e801-4d3f-ac61-64615952aebd" containerID="32a03f53581016e8458cfcf2986dfe26e5246f2793c884a5203a887cdeefb6c8" exitCode=137
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.991837 4985 generic.go:334] "Generic (PLEG): container finished" podID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerID="e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f" exitCode=1
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:24.991950 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerDied","Data":"e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f"}
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.038846 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-g5tqr container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.038935 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-g5tqr" podUID="ae6864ac-d6e2-4d85-aa84-361f51b944eb" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.55:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106027 4985 patch_prober.go:28] interesting pod/monitoring-plugin-868c9846bf-6bwkl container/monitoring-plugin namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106097 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" podUID="54abc3c0-c9d2-49a3-bc29-854369637b99" containerName="monitoring-plugin" probeResult="failure" output="Get \"https://10.217.0.79:9443/health\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106486 4985 patch_prober.go:28] interesting pod/logging-loki-gateway-76696895d9-c6d96 container/opa namespace/openshift-logging: Readiness probe status=failure output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.106570 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-logging/logging-loki-gateway-76696895d9-c6d96" podUID="02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b" containerName="opa" probeResult="failure" output="Get \"https://10.217.0.56:8083/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.188996 4985 scope.go:117] "RemoveContainer" containerID="e91c414e4bddd6fb7b100b376f20e51c053f866b5e844a819f4081df4b77080f"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.296373 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8"
Jan 28 20:04:25 crc kubenswrapper[4985]: E0128 20:04:25.326672 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.545976 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.546041 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.546173 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.547787 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Liveness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.548182 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.548241 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.548440 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-jrf9w"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.600613 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-distributor-5f678c8dd6-2755m"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.601173 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-query-frontend-69d9546745-pcd6x"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.601575 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-logging/logging-loki-querier-76788598db-dkn9m"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.673389 4985 patch_prober.go:28] interesting pod/perses-operator-5bf474d74f-j7z4h container/perses-operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.673448 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h" podUID="971845b8-805d-4b4a-a8fd-14f263f17695" containerName="perses-operator" probeResult="failure" output="Get \"http://10.217.0.14:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.673522 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.734677 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.734805 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerName="ceilometer-notification-agent" probeResult="failure" output="command timed out"
Jan 28 20:04:25 crc kubenswrapper[4985]: E0128 20:04:25.783374 4985 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26777afd_4d9f_4ebb_b8ed_0be018fa5a17.slice/crio-efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70124ff4_00b0_41ef_947d_55eda7af02db.slice/crio-6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929.scope\": RecentStats: unable to find data in memory cache]"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.841871 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0"
Jan 28 20:04:25 crc kubenswrapper[4985]: I0128 20:04:25.859653 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-j7z4h"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.012094 4985 generic.go:334] "Generic (PLEG): container finished" podID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerID="efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093" exitCode=0
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.012276 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerDied","Data":"efcdb5995ad8535fb26c939596ae0288fe4108bc695625292cdb108a91bd2093"}
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.015156 4985 generic.go:334] "Generic (PLEG): container finished" podID="70124ff4-00b0-41ef-947d-55eda7af02db" containerID="6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929" exitCode=0
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.015270 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerDied","Data":"6af011f55a64374575ea0cae6d33d823b0facc6e20d048b8a1587919c0634929"}
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.017632 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerID="7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089" exitCode=137
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.017740 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerDied","Data":"7e9f8feabc8f90d4cc467e5a3a22c744a7cb51080d65e7cc9ae61b59a79f0089"}
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.019554 4985 generic.go:334] "Generic (PLEG): container finished" podID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerID="d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23" exitCode=0
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.019615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerDied","Data":"d717b3927ce83af8ba73330be9f868092fe0fdbdd83aacdbcf2ed308742ebd23"}
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.097990 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-j6799_db632812-bc0d-41f2-9c01-a19d40eb69be/console-operator/0.log"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098050 4985 generic.go:334] "Generic (PLEG): container finished" podID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerID="08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d" exitCode=1
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098193 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerDied","Data":"08a0795107d17d55b403752643a479ee0f629b233d8b8ff0a9ced0a20942f05d"}
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098842 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="operator" containerStatusID={"Type":"cri-o","ID":"22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd"} pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" containerMessage="Container operator failed liveness probe, will be restarted"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.098881 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" containerID="cri-o://22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd" gracePeriod=30
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.108024 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.387416 4985 patch_prober.go:28] interesting pod/thanos-querier-5695687f7c-8tcz2 container/kube-rbac-proxy-web namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.387843 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/thanos-querier-5695687f7c-8tcz2" podUID="1a0dd00c-a59d-4e21-968c-b1a7b1198758" containerName="kube-rbac-proxy-web" probeResult="failure" output="Get \"https://10.217.0.76:9091/-/ready\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.550787 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-6b9fb5fdcb-rbn84"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798418 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Liveness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798475 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798767 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798811 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798889 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.798968 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.800091 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="oauth-openshift" containerStatusID={"Type":"cri-o","ID":"47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95"} pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" containerMessage="Container oauth-openshift failed liveness probe, will be restarted"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.800533 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf"
Jan 28 20:04:26 crc kubenswrapper[4985]: I0128 20:04:26.965901 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.048446 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.048464 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/barbican-operator-controller-manager-7f86f8796f-ww4nj" podUID="4fa1b302-aad3-4e6e-9cd2-bba65262c1e8" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.103:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.113419 4985 generic.go:334] "Generic (PLEG): container finished" podID="1310770f-7cb7-4874-b2a0-4ef733911716" containerID="6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b" exitCode=1
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.113491 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerDied","Data":"6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.114674 4985 scope.go:117] "RemoveContainer" containerID="6e92c8c3af43ff2712b0f8ed60df9fc8862bc534e5395b1207bb47f744084f5b"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.124778 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" event={"ID":"cae1c988-06ab-4748-a62d-5bd7301b2c8d","Type":"ContainerStarted","Data":"0952d014831debce05e55414a932c95eac7cd0ff7fd38f0c9f8e18d35ab19dca"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.124930 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.128781 4985 generic.go:334] "Generic (PLEG): container finished" podID="70329607-4bbe-43ad-bb7a-2b62f26af473" containerID="b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0" exitCode=1
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.128840 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerDied","Data":"b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.129628 4985 scope.go:117] "RemoveContainer" containerID="b40c5de86bd5ee489a9235ce7345e2de0ac05a1a4eb0def7135cf083a63627f0"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.130423 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.130545 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.134154 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" event={"ID":"57ef54a5-9891-4f69-9907-b726d30d4006","Type":"ContainerStarted","Data":"1482f4a5a51d8ed6befa36bf3f466f86f4bfceb1974e8d4d9ca30bdf3999b605"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.134769 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.137045 4985 generic.go:334] "Generic (PLEG): container finished" podID="359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3" containerID="33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c" exitCode=1
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.137111 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerDied","Data":"33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.138265 4985 scope.go:117] "RemoveContainer" containerID="33e8754f74c0d539b6d740cc1480faa9b0b2b64b42c058d6a29292cd2a6ebd3c"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.140601 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.150882 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.150942 4985 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e" exitCode=1
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.151076 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.151175 4985 scope.go:117] "RemoveContainer" containerID="e5970423297390bbeb0badf41f43cb386222c076517f911b63f9e919ad9f09db"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.152795 4985 scope.go:117] "RemoveContainer" containerID="0025f144f3fa7cc81c86c1fe0e47ad15fbc5caa56b23b223f51fe0e0fd77569e"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.155736 4985 generic.go:334] "Generic (PLEG): container finished" podID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerID="35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5" exitCode=0
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.155804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerDied","Data":"35166b582511c0cb6470e0cf1786001c7eb41cdc45c00f7f9d0384210b660de5"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.168862 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" event={"ID":"81fa949b-5c24-44da-aa29-bd34bcc39d6e","Type":"ContainerStarted","Data":"991dbcbdd632c9448a6c1e6c2ea946fb4562580affccd884b294119803a1706e"}
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.169061 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171431 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" podUID="7ef21481-ade5-436a-ae3a-f284a7e438d3" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.102:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171548 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171449 4985 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171804 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.171958 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" podUID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.106:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172404 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body=
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172509 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/healthz\": dial tcp 10.217.0.117:8081: connect: connection refused"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172521 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body=
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172549 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172556 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172593 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body=
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172625 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172638 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.173164 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" podUID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.117:8081/readyz\": dial tcp 10.217.0.117:8081: connect: connection refused"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.172441 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.178306 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-8f79k"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.289038 4985 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.289093 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/livez?exclude=etcd\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.300573 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.300606 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.307791 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.327392 4985 patch_prober.go:28] interesting pod/console-74779d9b4-2xxwx container/console namespace/openshift-console: Readiness probe status=failure output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.327459 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/console-74779d9b4-2xxwx" podUID="6b348b0a-4b9a-4216-adbf-02bcefe1f011" containerName="console" probeResult="failure" output="Get \"https://10.217.0.138:8443/health\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.365598 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body=
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.365651 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused"
Jan 28 20:04:27 crc kubenswrapper[4985]: E0128 20:04:27.444002 4985 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.572430 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.594446 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.594529 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" podUID="873dc5cd-5c8e-417e-b99a-a52dfcfd701b"
containerName="manager" probeResult="failure" output="Get \"http://10.217.0.113:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.616309 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.700619 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-7478f7dbf9-7gfrh" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.732296 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.803430 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 20:04:27 crc kubenswrapper[4985]: I0128 20:04:27.803732 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.019799 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.033805 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.141972 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.142377 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.142885 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:28 crc kubenswrapper[4985]: E0128 20:04:28.142933 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a is running failed: container process not found" probeType="Readiness" pod="openstack-operators/openstack-operator-index-wnjfp" podUID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerName="registry-server" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.214318 4985 generic.go:334] "Generic (PLEG): container finished" podID="697da6ae-2950-468c-82e9-bcb1a1af61e7" containerID="bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4" exitCode=1 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.214420 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerDied","Data":"bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.215483 4985 scope.go:117] "RemoveContainer" containerID="bff91fc4047ca8cb0c7f5c491bb739bdfbe2ef37ed14ecab78cbc847a02193b4" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.226777 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.227013 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" podUID="91971c24-6187-432c-84ba-65dba69b4598" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.118:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.232940 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" podUID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.101:8081/readyz\": dial tcp 10.217.0.101:8081: connect: connection refused" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.235376 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" event={"ID":"fc080bc5-4b4f-4405-b458-7450aaf8714b","Type":"ContainerStarted","Data":"40e683da5f6dfbf5eb0e698cbdf59d61756a5c2415678d0fa46c39dcbbf52f16"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.235618 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.242052 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" podUID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.94:8080/readyz\": dial tcp 10.217.0.94:8080: connect: connection refused" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.262929 4985 generic.go:334] "Generic (PLEG): container finished" podID="82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62" containerID="8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e" exitCode=1 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 
20:04:28.262982 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerDied","Data":"8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.264995 4985 scope.go:117] "RemoveContainer" containerID="8a3f19cb6aa7abaef144114e6dd8bdb0d9b95990c08eded3c8ad0a1adc11123e" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.277323 4985 generic.go:334] "Generic (PLEG): container finished" podID="c77a825c-f720-48a7-b74f-49b16e3ecbed" containerID="c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26" exitCode=1 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.277413 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerDied","Data":"c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.278166 4985 scope.go:117] "RemoveContainer" containerID="c7994e4e9289d830d3d2b83f6fe38b4798e6db43a7a5f82ef83d020e4a399d26" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.282520 4985 generic.go:334] "Generic (PLEG): container finished" podID="99b88683-3e0a-4afa-91ab-71feac27fba1" containerID="1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2" exitCode=1 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.282643 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerDied","Data":"1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.284720 4985 scope.go:117] "RemoveContainer" containerID="1929e793821573d3c1a565d61317bcfad5538b41e79ae8732d91df7c5e2173b2" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.286891 4985 generic.go:334] "Generic (PLEG): container finished" podID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" exitCode=0 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.286959 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerDied","Data":"acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.292057 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-8f79k" event={"ID":"5fd77adb-e801-4d3f-ac61-64615952aebd","Type":"ContainerStarted","Data":"f7d81ad6f3093a262aa8648649aa0c6f2729bd2194c460388848a6793da65337"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.295332 4985 generic.go:334] "Generic (PLEG): container finished" podID="d4d6e990-839d-4186-9382-1a67922556df" containerID="63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d" exitCode=1 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.295404 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerDied","Data":"63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d"} Jan 28 20:04:28 crc 
kubenswrapper[4985]: I0128 20:04:28.310021 4985 scope.go:117] "RemoveContainer" containerID="63ac9ba384926938b30ecfda1c6080eb12ddc04d1c11ca3a283a65a2c51b023d" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.313728 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"0a346d5d0650a73ed5f79fd8579ceb35d9e12fbd8bd81d25f6fc533d308cdac7"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.318411 4985 generic.go:334] "Generic (PLEG): container finished" podID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerID="f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70" exitCode=0 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.318476 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerDied","Data":"f5ff21eae212661230e0f400cfd444bde35cb9b2316c59ec3f7a4c7fa2274b70"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.327160 4985 generic.go:334] "Generic (PLEG): container finished" podID="b5a0c28d-1434-40f0-8759-d76b65dc2c30" containerID="11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500" exitCode=1 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.327226 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerDied","Data":"11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.336603 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:28 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:28 crc kubenswrapper[4985]: > Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.336670 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.368674 4985 generic.go:334] "Generic (PLEG): container finished" podID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" exitCode=0 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.368763 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerDied","Data":"f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.372479 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="registry-server" containerStatusID={"Type":"cri-o","ID":"2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c"} pod="openshift-marketplace/redhat-operators-spssk" containerMessage="Container registry-server failed startup probe, will be restarted" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.372526 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" 
containerID="cri-o://2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c" gracePeriod=30 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.373364 4985 scope.go:117] "RemoveContainer" containerID="11f64e6924e35c8dac9934d956caaaa9c36e16ee58665f9b1149145a0715d500" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.374457 4985 generic.go:334] "Generic (PLEG): container finished" podID="50682373-a3d7-491e-84a0-1d5613ee2e8a" containerID="ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77" exitCode=1 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.374574 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerDied","Data":"ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.378411 4985 scope.go:117] "RemoveContainer" containerID="ff10dd6aec762e5c6f8ac00bc0e5212cc4c9ba6fe7bf3a0a1e2f0ca6c68d8b77" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.389859 4985 generic.go:334] "Generic (PLEG): container finished" podID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerID="22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd" exitCode=0 Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.391355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerDied","Data":"22bb6e2fff06e8c5d79d9d6c748a0ba6b6268071593344e6ef0465f43decebdd"} Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.392063 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.392117 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.394471 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.394503 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.736816 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Readiness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": 
dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.737194 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.737807 4985 patch_prober.go:28] interesting pod/prometheus-operator-admission-webhook-f54c54754-mttz8 container/prometheus-operator-admission-webhook namespace/openshift-monitoring: Liveness probe status=failure output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" start-of-body= Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.737875 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" podUID="81fa949b-5c24-44da-aa29-bd34bcc39d6e" containerName="prometheus-operator-admission-webhook" probeResult="failure" output="Get \"https://10.217.0.70:8443/healthz\": dial tcp 10.217.0.70:8443: connect: connection refused" Jan 28 20:04:28 crc kubenswrapper[4985]: I0128 20:04:28.997302 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e" containerName="kube-state-metrics" probeResult="failure" output="Get \"https://10.217.0.254:8081/readyz\": dial tcp 10.217.0.254:8081: connect: connection refused" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.278500 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-68b9ccc946-rk65w" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.339181 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-qlsnv" podUID="66ed71ac-c9a1-4130-bb76-eb5fc111f72a" containerName="controller" probeResult="failure" output="Get \"http://127.0.0.1:7572/metrics\": dial tcp 127.0.0.1:7572: connect: connection refused" Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.356337 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found" containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.361022 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found" containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.369364 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found" 
containerID="acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.369414 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acd8404035d60c13b004d9683afd64bbf18c6d26a548cfdba55e76448414796c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.373359 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" podUID="f6ebe169-8b20-4d94-99b7-96afffcb5118" containerName="frr-k8s-webhook-server" probeResult="failure" output="Get \"http://10.217.0.96:7572/metrics\": dial tcp 10.217.0.96:7572: connect: connection refused" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.440474 4985 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.440562 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7d10e722093917b94f3a479e3c814cf9428cf0d3207314c8564f19b4b94e826c"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.443916 4985 generic.go:334] "Generic (PLEG): container finished" podID="367b6525-0367-437a-9fe3-b2007411f4af" containerID="62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.443961 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerDied","Data":"62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.448011 4985 scope.go:117] "RemoveContainer" containerID="62135ee7a2eb606526c37bb8ddcd9bc19db80c6717a626f58c7287903e72ecfa" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.449119 4985 generic.go:334] "Generic (PLEG): container finished" podID="cc7f29e1-e6e0-45a0-920a-4b18d8204c65" containerID="b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.449161 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerDied","Data":"b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.449484 4985 scope.go:117] "RemoveContainer" containerID="b4af6b1594b7467f446e940a66763ef0f6b702bf026796c5550c43aad291ee7c" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.482085 4985 generic.go:334] "Generic (PLEG): container finished" podID="b29b2a3b-ca12-4e1c-8816-0d28cebe2dde" containerID="c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.482147 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerDied","Data":"c6e66f05a0d16e3fe2371e96f9a7cf894276603fbbf1aac905bd7a1b74d22b3b"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.494599 4985 generic.go:334] "Generic (PLEG): container finished" podID="9c7284ab-b40f-4275-b85e-77aebd660135" containerID="ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.494695 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerDied","Data":"ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.495448 4985 scope.go:117] "RemoveContainer" containerID="ac9d4b13d281d4e9fb7fc67135b7b9665a8e3d5bfc5600b7571ded9088424b3d" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.513730 4985 generic.go:334] "Generic (PLEG): container finished" podID="38846228-cec9-4a59-b9bb-c766121dacde" containerID="e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05" exitCode=1 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.513829 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerDied","Data":"e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.515680 4985 scope.go:117] "RemoveContainer" containerID="e3fa9329be40e8e7c004d6aea5bd6091de66c9c6bb481177d817723d553d5c05" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.524133 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.536123 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.542692 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.544068 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" containerID="f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9" cmd=["grpc_health_probe","-addr=:50051"] Jan 28 20:04:29 crc kubenswrapper[4985]: E0128 20:04:29.544112 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
f42e089663307d421c2a7372509e38947f722b23ea96175ddf49f72d3082bbb9 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.547793 4985 generic.go:334] "Generic (PLEG): container finished" podID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerID="dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.547958 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerDied","Data":"dcd1b7b2c9b099a64b97b202bb9f7fd3e0b1bcb3e84ef11fdc826b0963e66089"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.557528 4985 generic.go:334] "Generic (PLEG): container finished" podID="3314cb32-9bb8-46fd-b28e-5a6e9b779fa7" containerID="a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.557755 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerDied","Data":"a588eae6aca381c5d9ac38092dcee696ce64a70a8313bff5898eff2783e0af0a"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.558490 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.558532 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.564324 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" containerID="cri-o://c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" gracePeriod=25 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.564553 4985 generic.go:334] "Generic (PLEG): container finished" podID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerID="03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.564596 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerDied","Data":"03338a45259e63ff86a5b162e1f76627fc9bb12f10aaf142f4c25f67a1bbfd5c"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.567489 4985 generic.go:334] "Generic (PLEG): container finished" podID="be08d23e-d6c9-4b42-904b-c36b05dfc316" containerID="9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.567526 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerDied","Data":"9cef7e212ac2841b128f86d6ec36fe2a3490809adf860dd313b564257c0ad99b"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.576576 4985 generic.go:334] "Generic (PLEG): container finished" podID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerID="4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea" exitCode=0 Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.576879 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerDied","Data":"4c2347925908cece1c999f90b8a277d5f7b9d3d6eceb91e039c8ca2437637fea"} Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.924184 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 20:04:29 crc kubenswrapper[4985]: I0128 20:04:29.924713 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.020512 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" containerID="cri-o://e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" gracePeriod=23 Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.328334 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.328408 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.364844 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.365281 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.582039 4985 patch_prober.go:28] 
interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.582113 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.587287 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerDied","Data":"9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280"} Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.587221 4985 generic.go:334] "Generic (PLEG): container finished" podID="91971c24-6187-432c-84ba-65dba69b4598" containerID="9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280" exitCode=1 Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.588318 4985 scope.go:117] "RemoveContainer" containerID="9d2c97996374895a55b806ee971623886630ad28da6fcc1d054133f6f6157280" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.590278 4985 generic.go:334] "Generic (PLEG): container finished" podID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerID="9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48" exitCode=0 Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.590371 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerDied","Data":"9ff56c9523f5bafd270d42d2d854367fe80b33c8d2f772d856a6ab4876f1fa48"} Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.592887 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" event={"ID":"fa42b50c-59ed-4523-a6a0-994a72ff7071","Type":"ContainerStarted","Data":"e7d9191e6b961711762d840332431117287250aed579dab83322ef2d28ba23f5"} Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.627031 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Readiness probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]backend-http ok Jan 28 20:04:30 crc kubenswrapper[4985]: [+]has-synced ok Jan 28 20:04:30 crc kubenswrapper[4985]: [-]process-running failed: reason withheld Jan 28 20:04:30 crc kubenswrapper[4985]: healthz check failed Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.627078 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635477 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" 
start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635543 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635660 4985 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-4lnjx container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.635683 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" podUID="cae1c988-06ab-4748-a62d-5bd7301b2c8d" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 20:04:30 crc kubenswrapper[4985]: I0128 20:04:30.965465 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="metallb-system/speaker-6lq6d" podUID="b5094b56-07e5-45db-8a13-ce7b931b861e" containerName="speaker" probeResult="failure" output="Get \"http://localhost:29150/metrics\": dial tcp [::1]:29150: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.078929 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.080161 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.081795 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:31 crc kubenswrapper[4985]: E0128 20:04:31.081835 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.108698 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Jan 28 20:04:31 crc 
kubenswrapper[4985]: I0128 20:04:31.610471 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" event={"ID":"c77a825c-f720-48a7-b74f-49b16e3ecbed","Type":"ContainerStarted","Data":"783df7ef6709d49ba1fdd15972f0559543c9194300844aff0682556076cd0e99"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.610973 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.615804 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7461c47253b22ccd04b9ecdb708f52301f9e2a05703634013c41a2bdbfa6b730"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.618470 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.621990 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-6lq6d" event={"ID":"b5094b56-07e5-45db-8a13-ce7b931b861e","Type":"ContainerStarted","Data":"6833045965b4db5f71a89941eb40c148c967fd6106d608b51de410b637f7ea88"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.622203 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-6lq6d" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.624884 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" event={"ID":"70329607-4bbe-43ad-bb7a-2b62f26af473","Type":"ContainerStarted","Data":"707748125d7191c905a96f0931d8a59affa40e3297c907034f42d4fbc3b0e1de"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.625163 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.628697 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" event={"ID":"359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3","Type":"ContainerStarted","Data":"a5abdb6d118d0f853fdfd9b16a03305d4c46560c14c141eca51313f158412064"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.629587 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.632542 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" event={"ID":"d4d6e990-839d-4186-9382-1a67922556df","Type":"ContainerStarted","Data":"78709126d809c26d97d48a9f4bf4e58061c28186d34472b7d635d7f358f177e2"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.639011 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.647558 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" 
event={"ID":"4845499d-139f-4839-9f9f-4d77c7f0ae37","Type":"ContainerStarted","Data":"9a6e4cf0fcfff4838a57e7153aaff862541a3bfd97e0a91bf0b7f364310d1fcb"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.648239 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.648407 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.648449 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.651835 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" event={"ID":"70124ff4-00b0-41ef-947d-55eda7af02db","Type":"ContainerStarted","Data":"5f60dfe81d3f071462135af4af4128b52d2a308acb3162c63d4863d9a512f52f"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.651920 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.652171 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.652222 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.654980 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" event={"ID":"99b88683-3e0a-4afa-91ab-71feac27fba1","Type":"ContainerStarted","Data":"543e63830331d8d82aea0da0ca38f4216158dd9569b2059f39ed95de131ea709"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.655235 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.662661 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-58897d9998-j6799_db632812-bc0d-41f2-9c01-a19d40eb69be/console-operator/0.log" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.662909 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-j6799" event={"ID":"db632812-bc0d-41f2-9c01-a19d40eb69be","Type":"ContainerStarted","Data":"c86ed9f518788a5f9945d537e318887017b2117f5135e704d75f7f724eb6d1f0"} Jan 
28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.663000 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.667321 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" event={"ID":"f6ebe169-8b20-4d94-99b7-96afffcb5118","Type":"ContainerStarted","Data":"6030becdcf765cf15b70923de98b03ac3f2561b8e5be80b8946bd77d9ef89412"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.667717 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.669848 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.669904 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.671093 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" event={"ID":"82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62","Type":"ContainerStarted","Data":"cf101369cf85c9674f018e8e895e73945a08e7b8ec5e2e56aeee4bfc9a2e83bd"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.677137 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" event={"ID":"26777afd-4d9f-4ebb-b8ed-0be018fa5a17","Type":"ContainerStarted","Data":"f814f4bbff8e72532ff093711ea65a354dd1db8cca317c54d4411dbc6c778eb3"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.677459 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.682802 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" event={"ID":"983beebe-f0c3-4fba-9861-0ea007559cc5","Type":"ContainerStarted","Data":"95649f7a5a4ff9cfecec97fc9c5e21fda60ba14f5af89649189d36a23b73d4e0"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.683283 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.683795 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.683845 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" 
containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.686181 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" event={"ID":"50682373-a3d7-491e-84a0-1d5613ee2e8a","Type":"ContainerStarted","Data":"a3bd17b8623ecd9442143c4135a7a62281759fdaba53645b9e9dc41a8d3d923c"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.686396 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.692077 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" event={"ID":"697da6ae-2950-468c-82e9-bcb1a1af61e7","Type":"ContainerStarted","Data":"59c6fb267914bdebe741eccfd6ee9bce6f237394911b1eb50ef6e99d5ba8c574"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.693512 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.696868 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.697977 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"79dfe7194b0e62b23b4d4c5b70bd5155add0435bc59cf05863ad051dafed8b52"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.700107 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.703924 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pcb4d" event={"ID":"be08d23e-d6c9-4b42-904b-c36b05dfc316","Type":"ContainerStarted","Data":"8300f6020fc08f440ad96282b353b926db5a3a000c1da77ecce205a6dbdb5ce9"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.712355 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-qlsnv" event={"ID":"66ed71ac-c9a1-4130-bb76-eb5fc111f72a","Type":"ContainerStarted","Data":"c3903129a5e050768bf859bb1f16a9a4faa90b6f347027f166bd372d2864fc1e"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.713176 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.715053 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" event={"ID":"1310770f-7cb7-4874-b2a0-4ef733911716","Type":"ContainerStarted","Data":"795b749a3a33ce2f2e0e93a9b99bed6b6918d451c67149a49a303136ad19d09d"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.715343 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 
20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.721751 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" event={"ID":"b5a0c28d-1434-40f0-8759-d76b65dc2c30","Type":"ContainerStarted","Data":"b6fd30c1f3fa4c72fa4fad22e370eedd84788dd55134f39780fa4592a5d6f2e8"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.722619 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.729433 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" event={"ID":"a0590b9a-abcc-4541-9914-675dc0ca1976","Type":"ContainerStarted","Data":"46a35fb2be17a2d04681a0d0859480bbc515d0d735d4c5a112baba7d5a412ce1"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.730733 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.730797 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.730820 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.734707 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e","Type":"ContainerStarted","Data":"62cf1c8a35444574b7b1bf54c306a32a089ff1b805c5da39eba8f5950a3493b1"} Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.734932 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.734989 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.735093 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 20:04:31 crc kubenswrapper[4985]: I0128 20:04:31.735125 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.452748 4985 patch_prober.go:28] interesting pod/loki-operator-controller-manager-85fc96dbd6-9qljj container/manager namespace/openshift-operators-redhat: Readiness probe status=failure output="Get 
\"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.453118 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" podUID="fc080bc5-4b4f-4405-b458-7450aaf8714b" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.48:8081/readyz\": dial tcp 10.217.0.48:8081: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.585006 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.586743 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.594956 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:32 crc kubenswrapper[4985]: E0128 20:04:32.595051 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.747671 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-7s7s2" event={"ID":"38846228-cec9-4a59-b9bb-c766121dacde","Type":"ContainerStarted","Data":"08be1fbcf80783a420a679de05934fc91371f37013861c0aa0625fe62577273c"} Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.780427 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qnrsp_cb7bad3c-725d-4a80-b398-140c6acf3825/router/0.log" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.780717 4985 generic.go:334] "Generic (PLEG): container finished" podID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerID="8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a" exitCode=137 Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.780877 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerDied","Data":"8451ecb74d3c5ee99cec821aaa47c7970df959ecd8df15b6c7cf52a433376f5a"} Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781951 4985 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-hvkcw container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781953 4985 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-tlrkn container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782002 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" podUID="4845499d-139f-4839-9f9f-4d77c7f0ae37" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.61:8080/healthz\": dial tcp 10.217.0.61:8080: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782019 4985 patch_prober.go:28] interesting pod/console-operator-58897d9998-j6799 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782032 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" podUID="70124ff4-00b0-41ef-947d-55eda7af02db" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.34:5443/healthz\": dial tcp 10.217.0.34:5443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782058 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-j6799" podUID="db632812-bc0d-41f2-9c01-a19d40eb69be" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/readyz\": dial tcp 10.217.0.27:8443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781967 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782115 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.781998 4985 patch_prober.go:28] interesting pod/route-controller-manager-5549b68d6f-t2f7p container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782147 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" podUID="983beebe-f0c3-4fba-9861-0ea007559cc5" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.69:8443/healthz\": dial tcp 10.217.0.69:8443: connect: connection 
refused" Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782568 4985 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-lghqh container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" start-of-body= Jan 28 20:04:32 crc kubenswrapper[4985]: I0128 20:04:32.782594 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" podUID="fa42b50c-59ed-4523-a6a0-994a72ff7071" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.37:8443/healthz\": dial tcp 10.217.0.37:8443: connect: connection refused" Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.364789 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.365091 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.570581 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.792758 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-wnjfp" event={"ID":"3314cb32-9bb8-46fd-b28e-5a6e9b779fa7","Type":"ContainerStarted","Data":"a849f24b9864581dd1fe2b639b6520564fdc5a822b8e8b2ec44a366404a85f21"} Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.793212 4985 patch_prober.go:28] interesting pod/controller-manager-656679f4c7-mmrtg container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" start-of-body= Jan 28 20:04:33 crc kubenswrapper[4985]: I0128 20:04:33.793266 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" podUID="a0590b9a-abcc-4541-9914-675dc0ca1976" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": dial tcp 10.217.0.66:8443: connect: connection refused" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.323691 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-868c9846bf-6bwkl" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.338103 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.390327 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.466768 4985 
patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.466818 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.805929 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" event={"ID":"367b6525-0367-437a-9fe3-b2007411f4af","Type":"ContainerStarted","Data":"383dc81fb4b4a5055cd5226673e95c8f2bf67e8261407836fb4486ddc158608e"} Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.808267 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" event={"ID":"cc7f29e1-e6e0-45a0-920a-4b18d8204c65","Type":"ContainerStarted","Data":"b0f57d31b5ba5bdf7f84edda1d7123574e48b9a33672903e2bca66b75ebad7c3"} Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.808495 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 20:04:34 crc kubenswrapper[4985]: I0128 20:04:34.810561 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" event={"ID":"9c7284ab-b40f-4275-b85e-77aebd660135","Type":"ContainerStarted","Data":"7dd6cc1f217c705b4dd69f055fd838f5aa8de08ac32385e99de36645a10be038"} Jan 28 20:04:35 crc kubenswrapper[4985]: I0128 20:04:35.722677 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:35 crc kubenswrapper[4985]: I0128 20:04:35.850075 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 20:04:35 crc kubenswrapper[4985]: I0128 20:04:35.850517 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.108061 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" podUID="26777afd-4d9f-4ebb-b8ed-0be018fa5a17" containerName="cert-manager-webhook" probeResult="failure" output="Get \"http://10.217.0.44:6080/healthz\": dial tcp 10.217.0.44:6080: connect: connection refused" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.140759 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-77d5c5b54f-6skp6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.365509 4985 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-gm5gt container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 
20:04:36.365779 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" podUID="715ad1e8-6659-4a18-a007-ad31ffa7044e" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.383926 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-b8b6d4659-hktv5" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.514666 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-78d58447c5-dlssr" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.691807 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.692151 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.693353 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cinder-scheduler" containerStatusID={"Type":"cri-o","ID":"ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e"} pod="openstack/cinder-scheduler-0" containerMessage="Container cinder-scheduler failed liveness probe, will be restarted" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.693426 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" containerID="cri-o://ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e" gracePeriod=30 Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.761767 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"] Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.763768 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.763789 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.763827 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-utilities" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.763836 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-utilities" Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.763863 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-content" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.763868 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="extract-content" Jan 28 20:04:36 crc kubenswrapper[4985]: E0128 20:04:36.764155 4985 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerName="keystone-cron" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.764169 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerName="keystone-cron" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.764634 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="c901d430-df5f-4afa-8a40-9ed18d2ad552" containerName="keystone-cron" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.764675 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e90a8845-3321-45ae-8c9d-524afa36cdd7" containerName="registry-server" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.770261 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.881284 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-catalog-content\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.881492 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8m4mp\" (UniqueName: \"kubernetes.io/projected/bad9c3c9-3333-4c1b-a020-2322b7baae36-kube-api-access-8m4mp\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.881661 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-utilities\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.890654 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z2xq5" event={"ID":"d59677ee-1cc3-4635-a126-0383e56d3fc0","Type":"ContainerStarted","Data":"8e001b6717573e47dde036853c9600484c643d17dfa3271afbc9f87f864ba6a8"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.910530 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4fx27" event={"ID":"478fc51e-7963-4ba3-a5ec-c2b7045b8353","Type":"ContainerStarted","Data":"327771973a3d1d6a1a4aac847d6c2739715a8a362c1daaaa13d4585cae663b69"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.948644 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"b29b2a3b-ca12-4e1c-8816-0d28cebe2dde","Type":"ContainerStarted","Data":"f1ef70d944bea9183ea8dcafb63b98535f5e207813d52b5b82a42152b36c3f5a"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.963896 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-5444994796-qnrsp_cb7bad3c-725d-4a80-b398-140c6acf3825/router/0.log" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.963967 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-qnrsp" 
event={"ID":"cb7bad3c-725d-4a80-b398-140c6acf3825","Type":"ContainerStarted","Data":"013f0faf90e02d1c24593266d641dd3c59feb576f4d2fe401f9b506336ce4275"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.967628 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" event={"ID":"91971c24-6187-432c-84ba-65dba69b4598","Type":"ContainerStarted","Data":"7fd72ebd7aa35111b94e40f5fdc7771a59db814f8d1383cc484b15cf6b357e93"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.968471 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.989397 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-catalog-content\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.989480 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8m4mp\" (UniqueName: \"kubernetes.io/projected/bad9c3c9-3333-4c1b-a020-2322b7baae36-kube-api-access-8m4mp\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.989572 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-utilities\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.991097 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-catalog-content\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.992041 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bad9c3c9-3333-4c1b-a020-2322b7baae36-utilities\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.993639 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" event={"ID":"715ad1e8-6659-4a18-a007-ad31ffa7044e","Type":"ContainerStarted","Data":"94549d4f8e9257f2f1d2669248959bfed37ae938a6f3fe3e0192d7940abaaabe"} Jan 28 20:04:36 crc kubenswrapper[4985]: I0128 20:04:36.994725 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.013720 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" 
event={"ID":"a23ac89d-75e4-4511-afaa-ef9d6205a672","Type":"ContainerStarted","Data":"87ecffcc4f224ebf860a9f0c28bb447716191ca7e79dcd0ed492e3dd7b582097"} Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.013775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.014134 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.014176 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.054281 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-6f75f45d54-v5mmf" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.093377 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"] Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.132715 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-656679f4c7-mmrtg" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.140780 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8m4mp\" (UniqueName: \"kubernetes.io/projected/bad9c3c9-3333-4c1b-a020-2322b7baae36-kube-api-access-8m4mp\") pod \"certified-operators-v2zt6\" (UID: \"bad9c3c9-3333-4c1b-a020-2322b7baae36\") " pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.154909 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5549b68d6f-t2f7p" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.263707 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:04:37 crc kubenswrapper[4985]: E0128 20:04:37.264082 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.295138 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-74c974475f-b9j67" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.311385 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-69797bbcbd-xwzkh" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.371105 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/watcher-operator-controller-manager-564965969-xzkhh" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.430593 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.572700 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.627172 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.629891 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.629984 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.731017 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.735013 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:37 crc kubenswrapper[4985]: I0128 20:04:37.960604 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-694cf4f878-5zqpj" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.023936 4985 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-nfhqj container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" start-of-body= Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.023995 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" podUID="a23ac89d-75e4-4511-afaa-ef9d6205a672" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.12:8081/healthz\": dial tcp 10.217.0.12:8081: connect: connection refused" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.136558 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.136599 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.152017 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.160545 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-687c66fd56-xdvhx" Jan 28 20:04:38 crc 
kubenswrapper[4985]: I0128 20:04:38.597278 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.616309 4985 patch_prober.go:28] interesting pod/router-default-5444994796-qnrsp container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.620441 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-qnrsp" podUID="cb7bad3c-725d-4a80-b398-140c6acf3825" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.630386 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-fd7b78bd4-c2clz" Jan 28 20:04:38 crc kubenswrapper[4985]: I0128 20:04:38.745040 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-f54c54754-mttz8" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.024225 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.074711 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-wnjfp" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.314772 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-gm5gt" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.367459 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.367521 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.377145 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-qlsnv" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.529465 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.529532 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.559488 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-hvkcw" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.623051 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:39 crc kubenswrapper[4985]: I0128 20:04:39.928040 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-j6799" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.041698 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.047319 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-qnrsp" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.089717 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-8f79k" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.344271 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-lghqh" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.452430 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:40 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:40 crc kubenswrapper[4985]: > Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.599543 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:40 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:40 crc kubenswrapper[4985]: > Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.638420 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-4lnjx" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.670427 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-tlrkn" Jan 28 20:04:40 crc kubenswrapper[4985]: I0128 20:04:40.966006 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-6lq6d" Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.083580 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.085707 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.088286 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:41 crc kubenswrapper[4985]: E0128 20:04:41.088343 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" Jan 28 20:04:41 crc kubenswrapper[4985]: I0128 20:04:41.111507 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-mwrk6" Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.066777 4985 generic.go:334] "Generic (PLEG): container finished" podID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerID="ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e" exitCode=0 Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.066855 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerDied","Data":"ef7af7392a0a8e8daafa4c29f9a0b623ca6d2a81cb96174c2ed68ac2c092ef4e"} Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.298664 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-85fc96dbd6-9qljj" Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.583513 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.584204 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.584675 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:42 crc kubenswrapper[4985]: E0128 20:04:42.584732 4985 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0 is running failed: container process not found" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerName="galera" Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.642685 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz" Jan 28 20:04:42 crc kubenswrapper[4985]: I0128 20:04:42.779873 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"] Jan 28 20:04:42 crc kubenswrapper[4985]: W0128 20:04:42.787410 4985 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbad9c3c9_3333_4c1b_a020_2322b7baae36.slice/crio-08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e WatchSource:0}: Error finding container 08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e: Status 404 returned error can't find the container with id 08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e Jan 28 20:04:43 crc kubenswrapper[4985]: I0128 20:04:43.081418 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerStarted","Data":"08b1b1e12469811d9c19f1e7452483bdf6acdac16131f7fb57f9e0c1435fe84e"} Jan 28 20:04:43 crc kubenswrapper[4985]: I0128 20:04:43.102268 4985 generic.go:334] "Generic (PLEG): container finished" podID="b8253e52-6b52-45a9-b5d6-680d3dfbebe7" containerID="c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0" exitCode=0 Jan 28 20:04:43 crc kubenswrapper[4985]: I0128 20:04:43.102274 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerDied","Data":"c3e9db4f597df352a100c6a7be2c7f286582826c8b05db12887e9024b264c9e0"} Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.124612 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"b8253e52-6b52-45a9-b5d6-680d3dfbebe7","Type":"ContainerStarted","Data":"51e4ab062d26e9e62e405b43c5cfb6090cbfd4b202868d6cc4c9d661f9ad3c35"} Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.128973 4985 generic.go:334] "Generic (PLEG): container finished" podID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerID="a7f98dc1c4a3f422e11a1269332fcfae432cf598bd7e84e2b3508e5031e3a6e3" exitCode=0 Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.129023 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerDied","Data":"a7f98dc1c4a3f422e11a1269332fcfae432cf598bd7e84e2b3508e5031e3a6e3"} Jan 28 20:04:44 crc kubenswrapper[4985]: I0128 20:04:44.466402 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-nfhqj" Jan 28 20:04:45 crc kubenswrapper[4985]: I0128 20:04:45.970111 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-594c8c9d5d-fm7nr" Jan 28 20:04:46 crc kubenswrapper[4985]: I0128 20:04:46.505815 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5f4cd88d46-4smn2" Jan 28 20:04:46 crc kubenswrapper[4985]: I0128 20:04:46.733161 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-7bdb645866-7mtzf" Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 20:04:47.148568 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-79d5ccc684-qn5x9" Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 20:04:47.183008 4985 generic.go:334] "Generic (PLEG): container finished" podID="a808dc72-a951-4f07-a612-2fde39a49a30" containerID="ee163311dba6c1ce70ff2544f9371712e8075bba77bbad31800b493e5588741e" exitCode=1 Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 
20:04:47.183064 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerDied","Data":"ee163311dba6c1ce70ff2544f9371712e8075bba77bbad31800b493e5588741e"} Jan 28 20:04:47 crc kubenswrapper[4985]: I0128 20:04:47.576639 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 20:04:48 crc kubenswrapper[4985]: I0128 20:04:48.198466 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"07cf4e1d-9eb6-491a-90a5-dc30af589bc0","Type":"ContainerStarted","Data":"1f48d3ab4b19cf2cebcfdbbc33f325595adb0916611634a71eb5111f8e383743"} Jan 28 20:04:48 crc kubenswrapper[4985]: I0128 20:04:48.203032 4985 generic.go:334] "Generic (PLEG): container finished" podID="99828525-9397-448d-9a51-bc0da88038ac" containerID="eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409" exitCode=137 Jan 28 20:04:48 crc kubenswrapper[4985]: I0128 20:04:48.203183 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerDied","Data":"eedf56963284f4f02b309064398b6a7be6c00026bb391ec849a54c864758f409"} Jan 28 20:04:49 crc kubenswrapper[4985]: I0128 20:04:49.371775 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-szgpw" Jan 28 20:04:50 crc kubenswrapper[4985]: I0128 20:04:50.470358 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:50 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:50 crc kubenswrapper[4985]: > Jan 28 20:04:50 crc kubenswrapper[4985]: I0128 20:04:50.600199 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=< Jan 28 20:04:50 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:04:50 crc kubenswrapper[4985]: > Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.079638 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.081148 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.082870 4985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" 
cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"] Jan 28 20:04:51 crc kubenswrapper[4985]: E0128 20:04:51.082911 4985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerName="galera" Jan 28 20:04:51 crc kubenswrapper[4985]: W0128 20:04:51.575497 4985 logging.go:55] [core] [Channel #7181 SubChannel #7182]grpc: addrConn.createTransport failed to connect to {Addr: "/var/lib/kubelet/plugins/csi-hostpath/csi.sock", ServerName: "localhost", }. Err: connection error: desc = "transport: Error while dialing: dial unix /var/lib/kubelet/plugins/csi-hostpath/csi.sock: connect: connection refused" Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.843850 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" containerID="cri-o://47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95" gracePeriod=15 Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.901363 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.983840 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.983898 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984012 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984075 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984169 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984311 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc 
kubenswrapper[4985]: I0128 20:04:51.984390 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984545 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.984588 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") pod \"a808dc72-a951-4f07-a612-2fde39a49a30\" (UID: \"a808dc72-a951-4f07-a612-2fde39a49a30\") " Jan 28 20:04:51 crc kubenswrapper[4985]: I0128 20:04:51.999042 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.000905 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data" (OuterVolumeSpecName: "config-data") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.006962 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.015007 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss" (OuterVolumeSpecName: "kube-api-access-f5tss") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "kube-api-access-f5tss". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.017077 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "test-operator-ephemeral-workdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.078164 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.090652 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095041 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5tss\" (UniqueName: \"kubernetes.io/projected/a808dc72-a951-4f07-a612-2fde39a49a30-kube-api-access-f5tss\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095152 4985 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095487 4985 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095640 4985 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095661 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095677 4985 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ca-certs\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.095689 4985 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a808dc72-a951-4f07-a612-2fde39a49a30-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.105494 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.110230 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a808dc72-a951-4f07-a612-2fde39a49a30" (UID: "a808dc72-a951-4f07-a612-2fde39a49a30"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.149037 4985 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.198640 4985 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-ssh-key\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.198673 4985 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.198683 4985 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a808dc72-a951-4f07-a612-2fde39a49a30-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.260809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a808dc72-a951-4f07-a612-2fde39a49a30","Type":"ContainerDied","Data":"8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840"} Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.260828 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.262143 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ac53f28924ef34914b8f13ae4189420fe54cce41ee264f85ce7e1f954e89840" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.263046 4985 generic.go:334] "Generic (PLEG): container finished" podID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerID="47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95" exitCode=0 Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.263090 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerDied","Data":"47b2958f11c39ade31c2e91339ddcd95d53ee549c27d8c34ef46c24ef5c02a95"} Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.264156 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.527182 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.584123 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.584610 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:52 crc kubenswrapper[4985]: I0128 20:04:52.661483 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.309557 4985 generic.go:334] "Generic (PLEG): container finished" podID="43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8" containerID="e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83" exitCode=0 Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.309864 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerDied","Data":"e908237238de9401304d927da08264aafa5d7ea536ccef88fe7a5946a5f93b83"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.313569 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.323908 4985 generic.go:334] "Generic (PLEG): container finished" podID="99828525-9397-448d-9a51-bc0da88038ac" containerID="82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4" exitCode=1 Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.323994 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerDied","Data":"82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.334363 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" 
event={"ID":"f077e962-d9b2-45c5-a87e-1dd03ad0378b","Type":"ContainerStarted","Data":"0a982d845a9f831e0c88084af06f221301b67133998c9991352ecbfc3bd42961"} Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.334432 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.335140 4985 patch_prober.go:28] interesting pod/oauth-openshift-56cf947455-bgjvj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" start-of-body= Jan 28 20:04:53 crc kubenswrapper[4985]: I0128 20:04:53.335203 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" podUID="f077e962-d9b2-45c5-a87e-1dd03ad0378b" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.68:6443/healthz\": dial tcp 10.217.0.68:6443: connect: connection refused" Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.344989 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerStarted","Data":"67e6340b7385cbd4895b294330f0737f97a1d0e6a21067e4bee9b734f5e32783"} Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.347888 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8","Type":"ContainerStarted","Data":"ed2f8091895e95a2db82aadc41dd96eee2d0cdbf5f2ca90e286001883ce27f4f"} Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.350663 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"4ee2d13f340a17f08093a19637dc0d1941ddfb300085d4915a7368b76c5f943f"} Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.351642 4985 scope.go:117] "RemoveContainer" containerID="82bed0d8a42bca7e53b39c9544bdc0936cdb44ffd82eeecb67a51d1676f725c4" Jan 28 20:04:54 crc kubenswrapper[4985]: I0128 20:04:54.583626 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-56cf947455-bgjvj" Jan 28 20:04:57 crc kubenswrapper[4985]: I0128 20:04:57.564573 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/cinder-scheduler-0" podUID="07cf4e1d-9eb6-491a-90a5-dc30af589bc0" containerName="cinder-scheduler" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.401611 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-5zj27" event={"ID":"99828525-9397-448d-9a51-bc0da88038ac","Type":"ContainerStarted","Data":"11542b426bbe009755598c19ce242a68de7b2bc4b2683f0e2c7891f10ceff9a3"} Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.404607 4985 generic.go:334] "Generic (PLEG): container finished" podID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerID="67e6340b7385cbd4895b294330f0737f97a1d0e6a21067e4bee9b734f5e32783" exitCode=0 Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.404640 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" 
event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerDied","Data":"67e6340b7385cbd4895b294330f0737f97a1d0e6a21067e4bee9b734f5e32783"} Jan 28 20:04:58 crc kubenswrapper[4985]: I0128 20:04:58.960217 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.147657 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.446776 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-spssk_0762e6e7-b454-432f-91b7-b8cefccdc85e/registry-server/0.log" Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.451115 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c" exitCode=137 Jan 28 20:04:59 crc kubenswrapper[4985]: I0128 20:04:59.452199 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c"} Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.464112 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-spssk_0762e6e7-b454-432f-91b7-b8cefccdc85e/registry-server/0.log" Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.465339 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerStarted","Data":"2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715"} Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.468394 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-v2zt6" event={"ID":"bad9c3c9-3333-4c1b-a020-2322b7baae36","Type":"ContainerStarted","Data":"b501f5588865c688bdab98e0ea5fe0443eb390e5dbc5774e7319ee3d1a15949e"} Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.514543 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-v2zt6" podStartSLOduration=9.421382826 podStartE2EDuration="24.514522069s" podCreationTimestamp="2026-01-28 20:04:36 +0000 UTC" firstStartedPulling="2026-01-28 20:04:44.131721641 +0000 UTC m=+6694.958284482" lastFinishedPulling="2026-01-28 20:04:59.224860904 +0000 UTC m=+6710.051423725" observedRunningTime="2026-01-28 20:05:00.502176139 +0000 UTC m=+6711.328738960" watchObservedRunningTime="2026-01-28 20:05:00.514522069 +0000 UTC m=+6711.341084890" Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.588184 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:00 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:00 crc kubenswrapper[4985]: > Jan 28 20:05:00 crc kubenswrapper[4985]: I0128 20:05:00.595166 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:00 crc 
kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:00 crc kubenswrapper[4985]: > Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.077343 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.077396 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.174695 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 20:05:01 crc kubenswrapper[4985]: I0128 20:05:01.596790 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.125088 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 20:05:02 crc kubenswrapper[4985]: E0128 20:05:02.126090 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" containerName="tempest-tests-tempest-tests-runner" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.126110 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" containerName="tempest-tests-tempest-tests-runner" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.126362 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a808dc72-a951-4f07-a612-2fde39a49a30" containerName="tempest-tests-tempest-tests-runner" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.127417 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.130232 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-hb5cc" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.160860 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.286685 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.286866 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxfgp\" (UniqueName: \"kubernetes.io/projected/e5d86a77-6a87-4434-b571-f453639eb3a2-kube-api-access-dxfgp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.389159 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxfgp\" (UniqueName: \"kubernetes.io/projected/e5d86a77-6a87-4434-b571-f453639eb3a2-kube-api-access-dxfgp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " 
pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.389416 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.389928 4985 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.427824 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxfgp\" (UniqueName: \"kubernetes.io/projected/e5d86a77-6a87-4434-b571-f453639eb3a2-kube-api-access-dxfgp\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.438160 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"e5d86a77-6a87-4434-b571-f453639eb3a2\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.461884 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Jan 28 20:05:02 crc kubenswrapper[4985]: I0128 20:05:02.655679 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 20:05:03 crc kubenswrapper[4985]: I0128 20:05:03.195124 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Jan 28 20:05:03 crc kubenswrapper[4985]: I0128 20:05:03.513595 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e5d86a77-6a87-4434-b571-f453639eb3a2","Type":"ContainerStarted","Data":"0056f7f17642c2708b2035e699df1829c6fce321931b2d5124b59cba9c26e7c3"} Jan 28 20:05:05 crc kubenswrapper[4985]: I0128 20:05:05.002714 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:05 crc kubenswrapper[4985]: I0128 20:05:05.002943 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:06 crc kubenswrapper[4985]: I0128 20:05:06.317527 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:06 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:06 crc kubenswrapper[4985]: > Jan 28 20:05:06 crc kubenswrapper[4985]: I0128 20:05:06.549794 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"e5d86a77-6a87-4434-b571-f453639eb3a2","Type":"ContainerStarted","Data":"7aaae0d8282a48328faa48d3e48327c860f6172702ab7ed9d8c2a0952e1bfa3b"} Jan 28 20:05:06 crc kubenswrapper[4985]: I0128 20:05:06.569447 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.7737185229999999 podStartE2EDuration="4.569425449s" podCreationTimestamp="2026-01-28 20:05:02 +0000 UTC" firstStartedPulling="2026-01-28 20:05:03.215614338 +0000 UTC m=+6714.042177159" lastFinishedPulling="2026-01-28 20:05:06.011321264 +0000 UTC m=+6716.837884085" observedRunningTime="2026-01-28 20:05:06.562744439 +0000 UTC m=+6717.389307260" watchObservedRunningTime="2026-01-28 20:05:06.569425449 +0000 UTC m=+6717.395988270" Jan 28 20:05:07 crc kubenswrapper[4985]: I0128 20:05:07.431918 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:05:07 crc kubenswrapper[4985]: I0128 20:05:07.432592 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:05:08 crc kubenswrapper[4985]: I0128 20:05:08.391430 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-74b956d56f-86jl5" Jan 28 20:05:08 crc kubenswrapper[4985]: I0128 20:05:08.544919 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v2zt6" podUID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:08 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 
20:05:08 crc kubenswrapper[4985]: > Jan 28 20:05:10 crc kubenswrapper[4985]: I0128 20:05:10.413632 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:10 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:10 crc kubenswrapper[4985]: > Jan 28 20:05:10 crc kubenswrapper[4985]: I0128 20:05:10.585736 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-4fx27" podUID="478fc51e-7963-4ba3-a5ec-c2b7045b8353" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:10 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:10 crc kubenswrapper[4985]: > Jan 28 20:05:16 crc kubenswrapper[4985]: I0128 20:05:16.072713 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:16 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:16 crc kubenswrapper[4985]: > Jan 28 20:05:18 crc kubenswrapper[4985]: I0128 20:05:18.490112 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-v2zt6" podUID="bad9c3c9-3333-4c1b-a020-2322b7baae36" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:18 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:18 crc kubenswrapper[4985]: > Jan 28 20:05:19 crc kubenswrapper[4985]: I0128 20:05:19.596419 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 20:05:19 crc kubenswrapper[4985]: I0128 20:05:19.661175 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4fx27" Jan 28 20:05:20 crc kubenswrapper[4985]: I0128 20:05:20.413389 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-z2xq5" podUID="d59677ee-1cc3-4635-a126-0383e56d3fc0" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:20 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:20 crc kubenswrapper[4985]: > Jan 28 20:05:23 crc kubenswrapper[4985]: I0128 20:05:23.502830 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 20:05:26 crc kubenswrapper[4985]: I0128 20:05:26.056694 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:26 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:26 crc kubenswrapper[4985]: > Jan 28 20:05:27 crc kubenswrapper[4985]: I0128 20:05:27.482759 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:05:27 crc kubenswrapper[4985]: I0128 20:05:27.544566 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-v2zt6" Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 
20:05:28.574455 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-v2zt6"] Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.723144 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.726956 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-mclkd" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" containerID="cri-o://d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8" gracePeriod=2 Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.890345 4985 generic.go:334] "Generic (PLEG): container finished" podID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerID="d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8" exitCode=0 Jan 28 20:05:28 crc kubenswrapper[4985]: I0128 20:05:28.890406 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8"} Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.407004 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.477412 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z2xq5" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.905714 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-mclkd" event={"ID":"1304efc2-5033-41c5-83b5-5df3edfde2f1","Type":"ContainerDied","Data":"9065c3cedcf2c522ec02096a476095855bf69695fefcb13d3535bb45ef54bf89"} Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.904175 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.906621 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9065c3cedcf2c522ec02096a476095855bf69695fefcb13d3535bb45ef54bf89" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.957976 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") pod \"1304efc2-5033-41c5-83b5-5df3edfde2f1\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.958025 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") pod \"1304efc2-5033-41c5-83b5-5df3edfde2f1\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.958106 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") pod \"1304efc2-5033-41c5-83b5-5df3edfde2f1\" (UID: \"1304efc2-5033-41c5-83b5-5df3edfde2f1\") " Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.960620 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities" (OuterVolumeSpecName: "utilities") pod "1304efc2-5033-41c5-83b5-5df3edfde2f1" (UID: "1304efc2-5033-41c5-83b5-5df3edfde2f1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:05:29 crc kubenswrapper[4985]: I0128 20:05:29.972481 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt" (OuterVolumeSpecName: "kube-api-access-4nhmt") pod "1304efc2-5033-41c5-83b5-5df3edfde2f1" (UID: "1304efc2-5033-41c5-83b5-5df3edfde2f1"). InnerVolumeSpecName "kube-api-access-4nhmt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.008808 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1304efc2-5033-41c5-83b5-5df3edfde2f1" (UID: "1304efc2-5033-41c5-83b5-5df3edfde2f1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.061544 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nhmt\" (UniqueName: \"kubernetes.io/projected/1304efc2-5033-41c5-83b5-5df3edfde2f1-kube-api-access-4nhmt\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.061578 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.061587 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1304efc2-5033-41c5-83b5-5df3edfde2f1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.919485 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-mclkd" Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.957215 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 20:05:30 crc kubenswrapper[4985]: I0128 20:05:30.973045 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-mclkd"] Jan 28 20:05:31 crc kubenswrapper[4985]: I0128 20:05:31.278782 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" path="/var/lib/kubelet/pods/1304efc2-5033-41c5-83b5-5df3edfde2f1/volumes" Jan 28 20:05:36 crc kubenswrapper[4985]: I0128 20:05:36.144209 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:36 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:36 crc kubenswrapper[4985]: > Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.110851 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:05:37 crc kubenswrapper[4985]: E0128 20:05:37.113082 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-content" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.113200 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-content" Jan 28 20:05:37 crc kubenswrapper[4985]: E0128 20:05:37.113305 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-utilities" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.113366 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="extract-utilities" Jan 28 20:05:37 crc kubenswrapper[4985]: E0128 20:05:37.113535 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.113605 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.115046 4985 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="1304efc2-5033-41c5-83b5-5df3edfde2f1" containerName="registry-server" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.119911 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.136611 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-sg6vz"/"default-dockercfg-267h6" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.136628 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sg6vz"/"openshift-service-ca.crt" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.136848 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-sg6vz"/"kube-root-ca.crt" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.227779 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.265787 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.265960 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.368579 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.368909 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.372953 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.402608 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"must-gather-9vwtc\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") " pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:37 crc kubenswrapper[4985]: I0128 20:05:37.445039 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" Jan 28 20:05:38 crc kubenswrapper[4985]: I0128 20:05:38.076795 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"] Jan 28 20:05:38 crc kubenswrapper[4985]: I0128 20:05:38.100377 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 20:05:38 crc kubenswrapper[4985]: I0128 20:05:38.180809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerStarted","Data":"06c49318f7af370af69f8377123b97a103b8ab3290738fc3695d6344614a2de1"} Jan 28 20:05:46 crc kubenswrapper[4985]: I0128 20:05:46.178306 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" probeResult="failure" output=< Jan 28 20:05:46 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:05:46 crc kubenswrapper[4985]: > Jan 28 20:05:47 crc kubenswrapper[4985]: I0128 20:05:47.927658 4985 scope.go:117] "RemoveContainer" containerID="d1f355fd0c5fb9871aa2c5c6896e3fe364696f87e04f69db46add5786f956fc8" Jan 28 20:05:49 crc kubenswrapper[4985]: I0128 20:05:49.182052 4985 scope.go:117] "RemoveContainer" containerID="13c932ede5b3e566b7752d12093b1dd4c26483b9039f367f6e4ba1e8e603bf3f" Jan 28 20:05:49 crc kubenswrapper[4985]: I0128 20:05:49.272619 4985 scope.go:117] "RemoveContainer" containerID="14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da" Jan 28 20:05:49 crc kubenswrapper[4985]: E0128 20:05:49.334065 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da\": container with ID starting with 14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da not found: ID does not exist" containerID="14a134cc6d453f346b75c36ad477bc28fbbffdb8a4403d5d30532b761990a0da" Jan 28 20:05:50 crc kubenswrapper[4985]: I0128 20:05:50.352681 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerStarted","Data":"5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367"} Jan 28 20:05:50 crc kubenswrapper[4985]: I0128 20:05:50.353010 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerStarted","Data":"0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629"} Jan 28 20:05:50 crc kubenswrapper[4985]: I0128 20:05:50.382408 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" podStartSLOduration=2.205996132 podStartE2EDuration="13.38238655s" podCreationTimestamp="2026-01-28 20:05:37 +0000 UTC" firstStartedPulling="2026-01-28 20:05:38.096108051 +0000 UTC m=+6748.922670882" lastFinishedPulling="2026-01-28 20:05:49.272498479 +0000 UTC m=+6760.099061300" observedRunningTime="2026-01-28 20:05:50.368766614 +0000 UTC m=+6761.195329435" watchObservedRunningTime="2026-01-28 20:05:50.38238655 +0000 UTC m=+6761.208949371" Jan 28 20:05:55 crc kubenswrapper[4985]: I0128 20:05:55.076691 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:55 crc kubenswrapper[4985]: I0128 20:05:55.131346 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.186671 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-tsjq4"] Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.188582 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.360751 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.360973 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.463472 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.463634 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.465303 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.483823 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"crc-debug-tsjq4\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:56 crc kubenswrapper[4985]: I0128 20:05:56.512900 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:05:57 crc kubenswrapper[4985]: I0128 20:05:57.471426 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" event={"ID":"e4275dde-20a8-4f67-8ad6-3599ced73c5a","Type":"ContainerStarted","Data":"788e0621889e18f29167784cbe9d1a5ffba373376c1a278b0e926707a59d5ab2"} Jan 28 20:05:58 crc kubenswrapper[4985]: I0128 20:05:58.651348 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:05:58 crc kubenswrapper[4985]: I0128 20:05:58.651939 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spssk" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" containerID="cri-o://2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715" gracePeriod=2 Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.547472 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-spssk_0762e6e7-b454-432f-91b7-b8cefccdc85e/registry-server/0.log" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.560200 4985 generic.go:334] "Generic (PLEG): container finished" podID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerID="2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715" exitCode=0 Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.560241 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715"} Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.560297 4985 scope.go:117] "RemoveContainer" containerID="2557bb987631cc8664db3ca41a93039f004fa96ab105b36b4deb767b758e348c" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.801287 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.881348 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") pod \"0762e6e7-b454-432f-91b7-b8cefccdc85e\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.881595 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") pod \"0762e6e7-b454-432f-91b7-b8cefccdc85e\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.881755 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") pod \"0762e6e7-b454-432f-91b7-b8cefccdc85e\" (UID: \"0762e6e7-b454-432f-91b7-b8cefccdc85e\") " Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.884637 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities" (OuterVolumeSpecName: "utilities") pod "0762e6e7-b454-432f-91b7-b8cefccdc85e" (UID: "0762e6e7-b454-432f-91b7-b8cefccdc85e"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.919911 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb" (OuterVolumeSpecName: "kube-api-access-blvfb") pod "0762e6e7-b454-432f-91b7-b8cefccdc85e" (UID: "0762e6e7-b454-432f-91b7-b8cefccdc85e"). InnerVolumeSpecName "kube-api-access-blvfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.984834 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-blvfb\" (UniqueName: \"kubernetes.io/projected/0762e6e7-b454-432f-91b7-b8cefccdc85e-kube-api-access-blvfb\") on node \"crc\" DevicePath \"\"" Jan 28 20:05:59 crc kubenswrapper[4985]: I0128 20:05:59.984965 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.039418 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0762e6e7-b454-432f-91b7-b8cefccdc85e" (UID: "0762e6e7-b454-432f-91b7-b8cefccdc85e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.087013 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0762e6e7-b454-432f-91b7-b8cefccdc85e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.573195 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spssk" event={"ID":"0762e6e7-b454-432f-91b7-b8cefccdc85e","Type":"ContainerDied","Data":"28f0a59519c9b60c4ce3a2ff63447bff887c38b436a2ce97a8fb8d2c39a8e834"} Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.573505 4985 scope.go:117] "RemoveContainer" containerID="2bd5b6a535cc49b2d36365b04fa8076e4297a92f613c32df8c333a3ba612f715" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.573712 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spssk" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.608852 4985 scope.go:117] "RemoveContainer" containerID="dda8ac60f550a2e96f02464275f0b11a82d9a3d53d2e2270e9d67c06ea4c3b44" Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.616058 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.627995 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spssk"] Jan 28 20:06:00 crc kubenswrapper[4985]: I0128 20:06:00.641845 4985 scope.go:117] "RemoveContainer" containerID="3c2283779a914e25036c37ef2827bd05492395f0fd0244baa58d85cf05f996a1" Jan 28 20:06:01 crc kubenswrapper[4985]: I0128 20:06:01.280556 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" path="/var/lib/kubelet/pods/0762e6e7-b454-432f-91b7-b8cefccdc85e/volumes" Jan 28 20:06:08 crc kubenswrapper[4985]: I0128 20:06:08.304595 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 2.442366604s: [/var/lib/containers/storage/overlay/2b74aa33c03668223a87dd3c1ff4a84a09224e18713c6538d4c947dab78be4d8/diff /var/log/pods/openstack_openstackclient_1d8f391e-0ed3-4969-b61b-5b9d602644fa/openstackclient/0.log]; will not log again for this container unless duration exceeds 3s Jan 28 20:06:10 crc kubenswrapper[4985]: I0128 20:06:10.701841 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" event={"ID":"e4275dde-20a8-4f67-8ad6-3599ced73c5a","Type":"ContainerStarted","Data":"6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad"} Jan 28 20:06:10 crc kubenswrapper[4985]: I0128 20:06:10.722137 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" podStartSLOduration=1.256446869 podStartE2EDuration="14.722120885s" podCreationTimestamp="2026-01-28 20:05:56 +0000 UTC" firstStartedPulling="2026-01-28 20:05:56.56651735 +0000 UTC m=+6767.393080171" lastFinishedPulling="2026-01-28 20:06:10.032191366 +0000 UTC m=+6780.858754187" observedRunningTime="2026-01-28 20:06:10.715986461 +0000 UTC m=+6781.542549282" watchObservedRunningTime="2026-01-28 20:06:10.722120885 +0000 UTC m=+6781.548683706" Jan 28 20:06:47 crc kubenswrapper[4985]: I0128 20:06:47.161429 4985 generic.go:334] "Generic (PLEG): container finished" podID="59d3bb7a-cda7-41ee-b0e1-9db6e930ffde" containerID="7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d" exitCode=0 Jan 28 20:06:47 crc kubenswrapper[4985]: I0128 20:06:47.161500 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerDied","Data":"7dd77068bf3eb2a91485c6b77d6e558f0ea9cb261db063d16cb699f2d789cd1d"} Jan 28 20:06:48 crc kubenswrapper[4985]: I0128 20:06:48.193153 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" event={"ID":"59d3bb7a-cda7-41ee-b0e1-9db6e930ffde","Type":"ContainerStarted","Data":"127164fb038939b87b998bbc470dbfa25a25034bad6586262e8b9900a8bf292f"} Jan 28 20:07:02 crc kubenswrapper[4985]: I0128 20:07:02.368311 4985 generic.go:334] "Generic (PLEG): container finished" podID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" 
containerID="6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad" exitCode=0 Jan 28 20:07:02 crc kubenswrapper[4985]: I0128 20:07:02.368423 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" event={"ID":"e4275dde-20a8-4f67-8ad6-3599ced73c5a","Type":"ContainerDied","Data":"6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad"} Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.514196 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.566183 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-tsjq4"] Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.576907 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-tsjq4"] Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.624769 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") pod \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.624898 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host" (OuterVolumeSpecName: "host") pod "e4275dde-20a8-4f67-8ad6-3599ced73c5a" (UID: "e4275dde-20a8-4f67-8ad6-3599ced73c5a"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.624987 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") pod \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\" (UID: \"e4275dde-20a8-4f67-8ad6-3599ced73c5a\") " Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.625730 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e4275dde-20a8-4f67-8ad6-3599ced73c5a-host\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.635056 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4" (OuterVolumeSpecName: "kube-api-access-7hjd4") pod "e4275dde-20a8-4f67-8ad6-3599ced73c5a" (UID: "e4275dde-20a8-4f67-8ad6-3599ced73c5a"). InnerVolumeSpecName "kube-api-access-7hjd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.728242 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hjd4\" (UniqueName: \"kubernetes.io/projected/e4275dde-20a8-4f67-8ad6-3599ced73c5a-kube-api-access-7hjd4\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.766852 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:03 crc kubenswrapper[4985]: I0128 20:07:03.766899 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.394330 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="788e0621889e18f29167784cbe9d1a5ffba373376c1a278b0e926707a59d5ab2" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.394660 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-tsjq4" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.769889 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-qpf2f"] Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770444 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" containerName="container-00" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770460 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" containerName="container-00" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770475 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770481 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770516 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770522 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770539 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-content" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770544 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-content" Jan 28 20:07:04 crc kubenswrapper[4985]: E0128 20:07:04.770560 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-utilities" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770566 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="extract-utilities" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770771 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.770809 
4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" containerName="container-00" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.771673 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.850675 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.850902 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.952512 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.952619 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.952940 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:04 crc kubenswrapper[4985]: I0128 20:07:04.971355 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"crc-debug-qpf2f\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:05 crc kubenswrapper[4985]: I0128 20:07:05.088775 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:05 crc kubenswrapper[4985]: W0128 20:07:05.157707 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b22e0bb_441d_4cda_8e55_82ad8593f13c.slice/crio-b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3 WatchSource:0}: Error finding container b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3: Status 404 returned error can't find the container with id b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3 Jan 28 20:07:05 crc kubenswrapper[4985]: I0128 20:07:05.281014 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4275dde-20a8-4f67-8ad6-3599ced73c5a" path="/var/lib/kubelet/pods/e4275dde-20a8-4f67-8ad6-3599ced73c5a/volumes" Jan 28 20:07:05 crc kubenswrapper[4985]: I0128 20:07:05.413798 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" event={"ID":"6b22e0bb-441d-4cda-8e55-82ad8593f13c","Type":"ContainerStarted","Data":"b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3"} Jan 28 20:07:06 crc kubenswrapper[4985]: I0128 20:07:06.425940 4985 generic.go:334] "Generic (PLEG): container finished" podID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerID="ae043829729a5304a684bda1750cb3b2c47fa611ecf13670e0e552bc36940e3c" exitCode=0 Jan 28 20:07:06 crc kubenswrapper[4985]: I0128 20:07:06.426037 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" event={"ID":"6b22e0bb-441d-4cda-8e55-82ad8593f13c","Type":"ContainerDied","Data":"ae043829729a5304a684bda1750cb3b2c47fa611ecf13670e0e552bc36940e3c"} Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.582064 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.724973 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") pod \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.725356 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") pod \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\" (UID: \"6b22e0bb-441d-4cda-8e55-82ad8593f13c\") " Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.725469 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host" (OuterVolumeSpecName: "host") pod "6b22e0bb-441d-4cda-8e55-82ad8593f13c" (UID: "6b22e0bb-441d-4cda-8e55-82ad8593f13c"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.726027 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b22e0bb-441d-4cda-8e55-82ad8593f13c-host\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.732104 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh" (OuterVolumeSpecName: "kube-api-access-v6sjh") pod "6b22e0bb-441d-4cda-8e55-82ad8593f13c" (UID: "6b22e0bb-441d-4cda-8e55-82ad8593f13c"). InnerVolumeSpecName "kube-api-access-v6sjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:07:07 crc kubenswrapper[4985]: I0128 20:07:07.828239 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v6sjh\" (UniqueName: \"kubernetes.io/projected/6b22e0bb-441d-4cda-8e55-82ad8593f13c-kube-api-access-v6sjh\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.345897 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-qpf2f"] Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.356662 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-qpf2f"] Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.455377 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4e76df559edd283a4370762cdcd629371fe973ac4826e5a9899565f84b4b3e3" Jan 28 20:07:08 crc kubenswrapper[4985]: I0128 20:07:08.455440 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-qpf2f" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.278862 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" path="/var/lib/kubelet/pods/6b22e0bb-441d-4cda-8e55-82ad8593f13c/volumes" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.561533 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-zr5mq"] Jan 28 20:07:09 crc kubenswrapper[4985]: E0128 20:07:09.562367 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerName="container-00" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.562385 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerName="container-00" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.562622 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="0762e6e7-b454-432f-91b7-b8cefccdc85e" containerName="registry-server" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.562653 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b22e0bb-441d-4cda-8e55-82ad8593f13c" containerName="container-00" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.563478 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.682716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.682840 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.787196 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.787358 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.787488 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.823654 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"crc-debug-zr5mq\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: I0128 20:07:09.896567 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:09 crc kubenswrapper[4985]: W0128 20:07:09.939262 4985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ef092c5_c571_4b51_bd8d_16f348128393.slice/crio-f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53 WatchSource:0}: Error finding container f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53: Status 404 returned error can't find the container with id f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53 Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.481217 4985 generic.go:334] "Generic (PLEG): container finished" podID="6ef092c5-c571-4b51-bd8d-16f348128393" containerID="ee620cce9e13ced05e21107f3a230592d8cb95fd00ed4f37d416b23d67a3024d" exitCode=0 Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.481288 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" event={"ID":"6ef092c5-c571-4b51-bd8d-16f348128393","Type":"ContainerDied","Data":"ee620cce9e13ced05e21107f3a230592d8cb95fd00ed4f37d416b23d67a3024d"} Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.481312 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" event={"ID":"6ef092c5-c571-4b51-bd8d-16f348128393","Type":"ContainerStarted","Data":"f3d7f1587c5736e90ae1ce34980089bf4a85618e1fb53002185047bfbae92c53"} Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.522240 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-zr5mq"] Jan 28 20:07:10 crc kubenswrapper[4985]: I0128 20:07:10.532739 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/crc-debug-zr5mq"] Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.186590 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.187053 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.634652 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.736493 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") pod \"6ef092c5-c571-4b51-bd8d-16f348128393\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.736575 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") pod \"6ef092c5-c571-4b51-bd8d-16f348128393\" (UID: \"6ef092c5-c571-4b51-bd8d-16f348128393\") " Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.736940 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host" (OuterVolumeSpecName: "host") pod "6ef092c5-c571-4b51-bd8d-16f348128393" (UID: "6ef092c5-c571-4b51-bd8d-16f348128393"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.737695 4985 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6ef092c5-c571-4b51-bd8d-16f348128393-host\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.742809 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr" (OuterVolumeSpecName: "kube-api-access-qjhgr") pod "6ef092c5-c571-4b51-bd8d-16f348128393" (UID: "6ef092c5-c571-4b51-bd8d-16f348128393"). InnerVolumeSpecName "kube-api-access-qjhgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:07:11 crc kubenswrapper[4985]: I0128 20:07:11.839736 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjhgr\" (UniqueName: \"kubernetes.io/projected/6ef092c5-c571-4b51-bd8d-16f348128393-kube-api-access-qjhgr\") on node \"crc\" DevicePath \"\"" Jan 28 20:07:12 crc kubenswrapper[4985]: I0128 20:07:12.509017 4985 scope.go:117] "RemoveContainer" containerID="ee620cce9e13ced05e21107f3a230592d8cb95fd00ed4f37d416b23d67a3024d" Jan 28 20:07:12 crc kubenswrapper[4985]: I0128 20:07:12.509051 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-sg6vz/crc-debug-zr5mq" Jan 28 20:07:13 crc kubenswrapper[4985]: I0128 20:07:13.277010 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" path="/var/lib/kubelet/pods/6ef092c5-c571-4b51-bd8d-16f348128393/volumes" Jan 28 20:07:23 crc kubenswrapper[4985]: I0128 20:07:23.772572 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:23 crc kubenswrapper[4985]: I0128 20:07:23.776782 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6845d579bb-9lznf" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.211689 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-api/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.404009 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-listener/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.407119 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-evaluator/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.646835 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_aodh-0_9f75cd8d-6a02-43e4-8e58-92f8d024311b/aodh-notifier/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.791481 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668ffb7f9d-shvfm_04b28283-6f65-478e-952d-f965423f413e/barbican-api-log/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.813279 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-668ffb7f9d-shvfm_04b28283-6f65-478e-952d-f965423f413e/barbican-api/0.log" Jan 28 20:07:36 crc kubenswrapper[4985]: I0128 20:07:36.937150 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cc6bcfccd-rh55k_f4b18150-cbd6-4c6f-a28b-8c66b1e875f2/barbican-keystone-listener/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.090132 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-6cc6bcfccd-rh55k_f4b18150-cbd6-4c6f-a28b-8c66b1e875f2/barbican-keystone-listener-log/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.178778 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c84c9469f-9xntt_d885ddad-ecc9-4b73-ad9e-9da819f95107/barbican-worker/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.214646 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-6c84c9469f-9xntt_d885ddad-ecc9-4b73-ad9e-9da819f95107/barbican-worker-log/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.378073 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nmknx_3865f1db-f707-4b28-bbf2-8ce1975baa1f/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.430918 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/ceilometer-central-agent/1.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.621179 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/ceilometer-central-agent/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.632181 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/ceilometer-notification-agent/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.659871 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/proxy-httpd/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.676616 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_b29b2a3b-ca12-4e1c-8816-0d28cebe2dde/sg-core/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.873242 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_841350c5-b9e8-4331-9282-e129f8152153/cinder-api-log/0.log" Jan 28 20:07:37 crc kubenswrapper[4985]: I0128 20:07:37.924029 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_841350c5-b9e8-4331-9282-e129f8152153/cinder-api/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.122456 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_07cf4e1d-9eb6-491a-90a5-dc30af589bc0/cinder-scheduler/1.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.183965 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_07cf4e1d-9eb6-491a-90a5-dc30af589bc0/cinder-scheduler/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.206971 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_07cf4e1d-9eb6-491a-90a5-dc30af589bc0/probe/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.366636 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-lbrsn_ed5a5127-7214-4f45-bda0-a1c6ecbaaede/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.471109 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-l2fvc_89fa72dd-7320-41fe-8df4-161d84d41b84/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.593749 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-jqtwd_63ee6cb7-f768-47d8-a266-e1e6ca6926ea/init/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.754743 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-jqtwd_63ee6cb7-f768-47d8-a266-e1e6ca6926ea/init/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.803048 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-42d8l_fbfc48e7-8a35-4fc6-b9fd-0c1735864116/download-cache-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:38 crc kubenswrapper[4985]: I0128 20:07:38.855535 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-5d75f767dc-jqtwd_63ee6cb7-f768-47d8-a266-e1e6ca6926ea/dnsmasq-dns/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.061814 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9ff4e22d-1c99-4c30-9eaa-3225c1e868c7/glance-httpd/0.log" Jan 28 
20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.095577 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_9ff4e22d-1c99-4c30-9eaa-3225c1e868c7/glance-log/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.222015 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d7b0993c-0b43-44d7-8498-6808f2a1439e/glance-httpd/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.297523 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_d7b0993c-0b43-44d7-8498-6808f2a1439e/glance-log/0.log" Jan 28 20:07:39 crc kubenswrapper[4985]: I0128 20:07:39.915836 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-engine-5df4f6c8f9-fvvqb_45d84233-dc44-4b3c-8aaa-f08ab50c0512/heat-engine/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.121055 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-zm5xl_50ce12a8-7d79-4fa2-a879-e3082ba41427/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.267874 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-api-9d696c4dd-qgm9g_f91275ab-50ad-4d69-953f-764ccd276927/heat-api/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.310139 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_heat-cfnapi-76b7548687-cmjrr_c761ae73-94d1-46be-afe6-1232e2c589ff/heat-cfnapi/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.363307 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-25775_3baf8df5-1989-4678-8268-058f46511cfd/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.623759 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493781-6kphz_7635ee1a-7676-44ad-af7f-ebfab7b56933/keystone-cron/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.831470 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29493841-rkhj6_c901d430-df5f-4afa-8a40-9ed18d2ad552/keystone-cron/0.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.871548 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e/kube-state-metrics/1.log" Jan 28 20:07:40 crc kubenswrapper[4985]: I0128 20:07:40.979876 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-77c7879f98-bcrvp_d86022f2-8cd4-43fd-ba5d-0729c8d0fd4b/keystone-api/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.016870 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_1e6eb1bd-1379-4be2-bcb0-6d7a37e93e9e/kube-state-metrics/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.097019 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-swns9_05f3f537-0392-45c7-af0d-36294670ed29/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.166300 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_logging-edpm-deployment-openstack-edpm-ipam-wn6r7_c6c90c6c-aa78-4215-9c43-acd22891abfb/logging-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 
20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.186151 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.186219 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.397780 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_mysqld-exporter-0_6b1f6dd4-6d66-4f40-879f-5f0af3845842/mysqld-exporter/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.579507 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f49f9645f-bs9wr_2177b5b3-0121-4ff8-93dd-2f9ef36560f4/neutron-api/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.593024 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_88fe31db-8414-43ac-b547-fa0278d9508f/memcached/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.666847 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-f49f9645f-bs9wr_2177b5b3-0121-4ff8-93dd-2f9ef36560f4/neutron-httpd/0.log" Jan 28 20:07:41 crc kubenswrapper[4985]: I0128 20:07:41.712109 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-pcbhr_85887caf-94f1-4f74-820c-edba2628a8e6/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.128291 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_78b595e2-b61a-4921-8d69-28adfa53f6bb/nova-cell0-conductor-conductor/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.261181 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_11eaf6b3-7169-4587-af33-68f04428e630/nova-api-log/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.321564 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_bbb020dd-95f1-4d78-9899-9fd0eca60584/nova-cell1-conductor-conductor/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.485614 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_4e0bd087-7446-45b4-858b-7b514713d4fe/nova-cell1-novncproxy-novncproxy/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.597166 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-68wk4_b129af39-361b-4dba-bdbb-31531c3a2ce9/nova-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.663090 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_11eaf6b3-7169-4587-af33-68f04428e630/nova-api-api/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.728547 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7d99eaa1-3945-4192-9d61-7668d944bc63/nova-metadata-log/0.log" Jan 28 20:07:42 crc kubenswrapper[4985]: I0128 20:07:42.925775 
4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.029330 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_bdade9ba-ba1b-4093-bc40-73f68c84615f/nova-scheduler-scheduler/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.267433 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.274051 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/galera/1.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.303007 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_b8253e52-6b52-45a9-b5d6-680d3dfbebe7/galera/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.467440 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.769809 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/galera/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.780948 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/mysql-bootstrap/0.log" Jan 28 20:07:43 crc kubenswrapper[4985]: I0128 20:07:43.840574 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_43d9d2ff-f746-4d1f-8ed7-d49f5afc23b8/galera/1.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.008966 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_1d8f391e-0ed3-4969-b61b-5b9d602644fa/openstackclient/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.105813 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-9r84t_2d1c1ab5-7e43-47cd-8218-3d945574a79c/ovn-controller/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.453169 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-vsdt5_d67712df-b1fe-463f-9a6c-c0591aa6cec2/openstack-network-exporter/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.461601 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovsdb-server-init/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.588829 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_7d99eaa1-3945-4192-9d61-7668d944bc63/nova-metadata-metadata/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.724164 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovs-vswitchd/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.751714 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovsdb-server-init/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.762468 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-ovs-f287q_2c181f14-26b7-49f4-9ae0-869d9b291938/ovsdb-server/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.824148 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-h47tw_7b281922-4bb4-45f8-b633-d82925f4814e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.949453 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_76a14385-7b25-48b8-8614-1a77892a1119/openstack-network-exporter/0.log" Jan 28 20:07:44 crc kubenswrapper[4985]: I0128 20:07:44.979365 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_76a14385-7b25-48b8-8614-1a77892a1119/ovn-northd/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.041601 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_76ff3fb3-d9e1-41dc-a644-8ac29cb97d11/openstack-network-exporter/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.134396 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_76ff3fb3-d9e1-41dc-a644-8ac29cb97d11/ovsdbserver-nb/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.183559 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e1c7625-25e1-442f-9f71-5d2a9323306c/openstack-network-exporter/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.204112 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_6e1c7625-25e1-442f-9f71-5d2a9323306c/ovsdbserver-sb/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.373658 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-848676699d-9lbcr_cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1/placement-api/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.448592 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/init-config-reloader/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.509618 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-848676699d-9lbcr_cf096408-8ee5-4ac7-a0ec-6fd5675c9ff1/placement-log/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.653331 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/config-reloader/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.654029 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/prometheus/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.660064 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/init-config-reloader/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.692920 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_3d356801-0ed0-4343-87a9-29d23453d621/thanos-sidecar/0.log" Jan 28 20:07:45 crc kubenswrapper[4985]: I0128 20:07:45.829354 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34d82dad-dc98-4c0f-90c2-0b25f7d73c01/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.031531 4985 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34d82dad-dc98-4c0f-90c2-0b25f7d73c01/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.073656 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_34d82dad-dc98-4c0f-90c2-0b25f7d73c01/rabbitmq/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.098360 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.374953 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.378104 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_ae555e00-c2df-4fce-af07-a91133f8767d/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.433582 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_dcfa7b0e-a239-4d9f-bdfa-1cf4610aa5fe/rabbitmq/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.647932 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_ae555e00-c2df-4fce-af07-a91133f8767d/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.712095 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_249a0e05-d210-402f-b7f8-2caf153346d8/setup-container/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.729602 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-1_ae555e00-c2df-4fce-af07-a91133f8767d/rabbitmq/0.log" Jan 28 20:07:46 crc kubenswrapper[4985]: I0128 20:07:46.890077 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_249a0e05-d210-402f-b7f8-2caf153346d8/setup-container/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.063523 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-2_249a0e05-d210-402f-b7f8-2caf153346d8/rabbitmq/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.084151 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-pzqnb_b7a40a7e-4812-4308-87c7-1b3fb2d2bbe1/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.164993 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xgv8j_3b94af3f-603c-4a3e-966e-7a4bfbc78178/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.267627 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-fcmvk_7a5d3484-2192-44a6-b632-5a683af945d6/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.402337 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-8kf5l_748912b6-cdb7-40bc-875e-563d7913a6dd/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.513808 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-pbrcd_99c460d4-80df-4aac-9fc5-20198855b361/ssh-known-hosts-edpm-deployment/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.672076 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5bdcb887dc-rxkm6_12d4e4cf-9153-4a32-9155-f9d13a248a26/proxy-server/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.749220 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-5bdcb887dc-rxkm6_12d4e4cf-9153-4a32-9155-f9d13a248a26/proxy-httpd/0.log" Jan 28 20:07:47 crc kubenswrapper[4985]: I0128 20:07:47.991916 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-l4q82_75109476-5e36-45b8-afb9-1e7f3a9331f9/swift-ring-rebalance/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.134636 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-auditor/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.168694 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-reaper/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.213039 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-server/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.239419 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/account-replicator/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.342330 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-auditor/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.424682 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-server/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.431561 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-updater/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.437759 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/container-replicator/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.530289 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-auditor/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.556916 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-expirer/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.601472 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-server/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.616718 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-updater/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.632907 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/object-replicator/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.715698 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/rsync/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.770298 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_4b55b35c-0ef1-4db8-b435-24de7fda8ecc/swift-recon-cron/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.848943 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-lhknq_557f8a1e-1a37-47a3-aa41-7222181ea137/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:48 crc kubenswrapper[4985]: I0128 20:07:48.965602 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_telemetry-power-monitoring-edpm-deployment-openstack-edpm-gnqls_d9d4a4e3-9f29-45a2-9748-d133f122af06/telemetry-power-monitoring-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:49 crc kubenswrapper[4985]: I0128 20:07:49.156411 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_e5d86a77-6a87-4434-b571-f453639eb3a2/test-operator-logs-container/0.log" Jan 28 20:07:49 crc kubenswrapper[4985]: I0128 20:07:49.413507 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-5h28l_ae55970b-52a8-4bd7-8d82-853e9cd4ad32/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Jan 28 20:07:49 crc kubenswrapper[4985]: I0128 20:07:49.436070 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a808dc72-a951-4f07-a612-2fde39a49a30/tempest-tests-tempest-tests-runner/0.log" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.185562 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.186168 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.186217 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.215664 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 20:08:11 crc kubenswrapper[4985]: I0128 20:08:11.215778 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" 
containerName="machine-config-daemon" containerID="cri-o://feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9" gracePeriod=600 Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.213828 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9" exitCode=0 Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.213927 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9"} Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.214510 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"} Jan 28 20:08:12 crc kubenswrapper[4985]: I0128 20:08:12.214545 4985 scope.go:117] "RemoveContainer" containerID="81dad89a62b889bed312ab77391ca3ec745fe60483f6f6c989acf44b195842c8" Jan 28 20:08:14 crc kubenswrapper[4985]: I0128 20:08:14.801082 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/util/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.080176 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/util/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.101662 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/pull/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.147509 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/pull/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.259766 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/util/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.286791 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/pull/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.328012 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_07a26f13d6ea06f09af2779dfaeec09a555dcc6fa675d4158646a21f19jz4sg_b5e9d40d-8ad9-4602-ac23-7cad303b1696/extract/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.554701 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7f86f8796f-ww4nj_4fa1b302-aad3-4e6e-9cd2-bba65262c1e8/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.573320 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-7478f7dbf9-7gfrh_7ef21481-ade5-436a-ae3a-f284a7e438d3/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.686350 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-b45d7bf98-75d84_4dfb4621-d061-4224-8aee-840726565aa3/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.875000 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-78fdd796fd-6bdmh_99893bb5-33ef-4159-bf8f-1c79a58e74d9/manager/0.log" Jan 28 20:08:15 crc kubenswrapper[4985]: I0128 20:08:15.887319 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fm7nr_cc7f29e1-e6e0-45a0-920a-4b18d8204c65/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.066265 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6skp6_99b88683-3e0a-4afa-91ab-71feac27fba1/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.081987 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fm7nr_cc7f29e1-e6e0-45a0-920a-4b18d8204c65/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.129703 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-77d5c5b54f-6skp6_99b88683-3e0a-4afa-91ab-71feac27fba1/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.318606 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-5zqpj_697da6ae-2950-468c-82e9-bcb1a1af61e7/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.495675 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-598f7747c9-s2n6z_75e682e9-e5a5-47f1-83cc-c8004ebe224a/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.636992 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-hktv5_b5a0c28d-1434-40f0-8759-d76b65dc2c30/manager/1.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.639397 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-694cf4f878-5zqpj_697da6ae-2950-468c-82e9-bcb1a1af61e7/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.786946 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-b8b6d4659-hktv5_b5a0c28d-1434-40f0-8759-d76b65dc2c30/manager/0.log" Jan 28 20:08:16 crc kubenswrapper[4985]: I0128 20:08:16.890628 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-78c6999f6f-9lm5f_654a2c56-81a7-4b32-ad1d-c4d60b054b47/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.008088 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-6b9fb5fdcb-rbn84_9897766d-6497-4d0e-bd9a-ef8e31a08e24/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.211194 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-78d58447c5-dlssr_873dc5cd-5c8e-417e-b99a-a52dfcfd701b/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.254993 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-7mtzf_9c7284ab-b40f-4275-b85e-77aebd660135/manager/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.407114 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-7mtzf_9c7284ab-b40f-4275-b85e-77aebd660135/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.409487 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-4smn2_367b6525-0367-437a-9fe3-b2007411f4af/manager/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.500884 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5f4cd88d46-4smn2_367b6525-0367-437a-9fe3-b2007411f4af/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.589319 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz_70329607-4bbe-43ad-bb7a-2b62f26af473/manager/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.662178 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-6b68b8b8549w4rz_70329607-4bbe-43ad-bb7a-2b62f26af473/manager/0.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.804297 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-687c66fd56-xdvhx_82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62/operator/1.log" Jan 28 20:08:17 crc kubenswrapper[4985]: I0128 20:08:17.957168 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-687c66fd56-xdvhx_82e231f4-e3b4-4c6e-a0c1-9cd94c47cc62/operator/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.164852 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wnjfp_3314cb32-9bb8-46fd-b28e-5a6e9b779fa7/registry-server/1.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.285759 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-wnjfp_3314cb32-9bb8-46fd-b28e-5a6e9b779fa7/registry-server/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.532366 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-v5mmf_50682373-a3d7-491e-84a0-1d5613ee2e8a/manager/1.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.563507 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-6f75f45d54-v5mmf_50682373-a3d7-491e-84a0-1d5613ee2e8a/manager/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.736086 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qn5x9_91971c24-6187-432c-84ba-65dba69b4598/manager/1.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.760240 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_placement-operator-controller-manager-79d5ccc684-qn5x9_91971c24-6187-432c-84ba-65dba69b4598/manager/0.log" Jan 28 20:08:18 crc kubenswrapper[4985]: I0128 20:08:18.930741 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7s7s2_38846228-cec9-4a59-b9bb-c766121dacde/operator/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.117919 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-7s7s2_38846228-cec9-4a59-b9bb-c766121dacde/operator/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.146115 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-547cbdb99f-9kbdr_c95374e8-7d41-4a49-add9-7f28196d70eb/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.345793 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-68b9ccc946-rk65w_c1e8524e-e047-4872-9ee1-ae4e013f8825/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.378223 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74c974475f-b9j67_359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3/manager/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.592454 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xwzkh_1310770f-7cb7-4874-b2a0-4ef733911716/manager/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.645522 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-69797bbcbd-xwzkh_1310770f-7cb7-4874-b2a0-4ef733911716/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.675641 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-xzkhh_d4d6e990-839d-4186-9382-1a67922556df/manager/1.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.708543 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-74c974475f-b9j67_359fd3be-e8b7-4f51-bb1d-a5d8bdc228c3/manager/0.log" Jan 28 20:08:19 crc kubenswrapper[4985]: I0128 20:08:19.787589 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-xzkhh_d4d6e990-839d-4186-9382-1a67922556df/manager/0.log" Jan 28 20:08:40 crc kubenswrapper[4985]: I0128 20:08:40.021063 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-wp27s_7f89cfdf-2a4d-4582-94f4-e53c45c3e09c/control-plane-machine-set-operator/0.log" Jan 28 20:08:40 crc kubenswrapper[4985]: I0128 20:08:40.205520 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hjjf7_218b57d8-c3a3-4a33-a3ef-6701cf557911/kube-rbac-proxy/0.log" Jan 28 20:08:40 crc kubenswrapper[4985]: I0128 20:08:40.262511 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hjjf7_218b57d8-c3a3-4a33-a3ef-6701cf557911/machine-api-operator/0.log" Jan 28 20:08:49 crc kubenswrapper[4985]: I0128 20:08:49.546311 4985 scope.go:117] "RemoveContainer" 
containerID="eaa8b31fd567cbe5402dee337791c77b7d17c2a64b306b5f934b501e7555c359" Jan 28 20:08:49 crc kubenswrapper[4985]: I0128 20:08:49.596348 4985 scope.go:117] "RemoveContainer" containerID="5651818473f4b98cbff41942fcaaaa5a4dff77b8a26838075287437237018599" Jan 28 20:08:49 crc kubenswrapper[4985]: I0128 20:08:49.633563 4985 scope.go:117] "RemoveContainer" containerID="6aae3f87a8a75e8de0eb7f2174fb7e1ad791b3b13463186c8a127596ad993426" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.633901 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-dzhtm_4f9db9b6-ec43-4789-9efd-f2d4831c67e8/cert-manager-controller/0.log" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.800799 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-bcvwj_aa962965-4b70-40f4-8400-b7ff2ec182e9/cert-manager-cainjector/0.log" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.868289 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mwrk6_26777afd-4d9f-4ebb-b8ed-0be018fa5a17/cert-manager-webhook/1.log" Jan 28 20:08:55 crc kubenswrapper[4985]: I0128 20:08:55.909434 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-mwrk6_26777afd-4d9f-4ebb-b8ed-0be018fa5a17/cert-manager-webhook/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.151953 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-slwkn_b866e710-8894-47da-9251-4118fec613bd/nmstate-console-plugin/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.347894 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-gkjzc_8f0319d2-9602-42b4-a3fb-c53bf5d3c244/nmstate-handler/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.387820 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vznlg_05eeb2e4-510c-4b66-addf-efaddce8cfb0/kube-rbac-proxy/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.408701 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-vznlg_05eeb2e4-510c-4b66-addf-efaddce8cfb0/nmstate-metrics/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.561058 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-ztr6n_e130755a-0d4d-4efd-a08a-a3bda72ff4cf/nmstate-operator/0.log" Jan 28 20:09:11 crc kubenswrapper[4985]: I0128 20:09:11.626045 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-jrf9w_645ec0ef-97a6-4e2f-b691-ffcbcab4eed7/nmstate-webhook/0.log" Jan 28 20:09:26 crc kubenswrapper[4985]: I0128 20:09:26.458189 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/kube-rbac-proxy/0.log" Jan 28 20:09:26 crc kubenswrapper[4985]: I0128 20:09:26.539985 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/1.log" Jan 28 20:09:26 crc kubenswrapper[4985]: I0128 20:09:26.651690 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.501054 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-s9875_74fbf9d6-ccb4-4d90-9db8-2d4613334d81/prometheus-operator/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.727965 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_23ef5df5-bfbe-4465-8e87-d69896bf70aa/prometheus-operator-admission-webhook/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.833733 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_e192375e-5db5-46e4-922b-21b8bc5698ba/prometheus-operator-admission-webhook/0.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.949963 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/1.log" Jan 28 20:09:40 crc kubenswrapper[4985]: I0128 20:09:40.974439 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/0.log" Jan 28 20:09:41 crc kubenswrapper[4985]: I0128 20:09:41.085161 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-5w5dn_c9b84394-02f1-4bde-befe-a2a649925c93/observability-ui-dashboards/0.log" Jan 28 20:09:41 crc kubenswrapper[4985]: I0128 20:09:41.217813 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-j7z4h_971845b8-805d-4b4a-a8fd-14f263f17695/perses-operator/0.log" Jan 28 20:09:45 crc kubenswrapper[4985]: I0128 20:09:45.620219 4985 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-5bdcb887dc-rxkm6" podUID="12d4e4cf-9153-4a32-9155-f9d13a248a26" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.314307 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_cluster-logging-operator-79cf69ddc8-d28w5_4db97b28-803f-4b66-9322-f210440517ff/cluster-logging-operator/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.465210 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_collector-gthjs_be7250ed-2e5a-403a-abfa-f1855e86ae44/collector/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.514671 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-compactor-0_ac72f54d-936d-4c98-9f91-918f7a05b5d1/loki-compactor/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.675885 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-distributor-5f678c8dd6-2755m_effc2fb2-2eb7-4ea0-abf1-0d43bde4adeb/loki-distributor/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.789912 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-c6d96_02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b/gateway/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.850594 4985 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-c6d96_02e0988e-bb4d-4c63-a4aa-3f1432a1ee7b/opa/0.log" Jan 28 20:09:59 crc kubenswrapper[4985]: I0128 20:09:59.982000 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-g5tqr_ae6864ac-d6e2-4d85-aa84-361f51b944eb/gateway/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.091654 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-gateway-76696895d9-g5tqr_ae6864ac-d6e2-4d85-aa84-361f51b944eb/opa/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.108785 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-index-gateway-0_664a7afe-25ae-45f8-81bd-9a9c59c431cd/loki-index-gateway/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.358209 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-querier-76788598db-dkn9m_21e48b83-3e43-4ba7-8d53-adeeb9e7e3d7/loki-querier/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.359168 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-ingester-0_e322915e-933c-4de4-98dd-ef047ee5b056/loki-ingester/0.log" Jan 28 20:10:00 crc kubenswrapper[4985]: I0128 20:10:00.540869 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-logging_logging-loki-query-frontend-69d9546745-pcd6x_5c56d4fe-62c7-47ef-9a0f-607d899d19b8/loki-query-frontend/0.log" Jan 28 20:10:11 crc kubenswrapper[4985]: I0128 20:10:11.186428 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:10:11 crc kubenswrapper[4985]: I0128 20:10:11.187003 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:10:16 crc kubenswrapper[4985]: I0128 20:10:16.892465 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8f79k_5fd77adb-e801-4d3f-ac61-64615952aebd/controller/1.log" Jan 28 20:10:16 crc kubenswrapper[4985]: I0128 20:10:16.998506 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8f79k_5fd77adb-e801-4d3f-ac61-64615952aebd/kube-rbac-proxy/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.003084 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-8f79k_5fd77adb-e801-4d3f-ac61-64615952aebd/controller/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.173539 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.348818 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.377326 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.377636 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.432845 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.589269 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.638511 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.668045 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.676907 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.829747 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-reloader/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.864618 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/controller/1.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.868388 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-frr-files/0.log" Jan 28 20:10:17 crc kubenswrapper[4985]: I0128 20:10:17.875988 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/cp-metrics/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.029758 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/controller/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.117952 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/frr/1.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.152577 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/frr-metrics/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.297106 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/kube-rbac-proxy/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.618549 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/kube-rbac-proxy-frr/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 20:10:18.722944 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/reloader/0.log" Jan 28 20:10:18 crc kubenswrapper[4985]: I0128 
20:10:18.843396 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-szgpw_f6ebe169-8b20-4d94-99b7-96afffcb5118/frr-k8s-webhook-server/1.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.037477 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-szgpw_f6ebe169-8b20-4d94-99b7-96afffcb5118/frr-k8s-webhook-server/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.087596 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74b956d56f-86jl5_c77a825c-f720-48a7-b74f-49b16e3ecbed/manager/1.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.374331 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fd7b78bd4-c2clz_57ef54a5-9891-4f69-9907-b726d30d4006/webhook-server/1.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.406483 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-74b956d56f-86jl5_c77a825c-f720-48a7-b74f-49b16e3ecbed/manager/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.619628 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-qlsnv_66ed71ac-c9a1-4130-bb76-eb5fc111f72a/frr/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.630440 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-fd7b78bd4-c2clz_57ef54a5-9891-4f69-9907-b726d30d4006/webhook-server/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.703823 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lq6d_b5094b56-07e5-45db-8a13-ce7b931b861e/kube-rbac-proxy/0.log" Jan 28 20:10:19 crc kubenswrapper[4985]: I0128 20:10:19.992686 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lq6d_b5094b56-07e5-45db-8a13-ce7b931b861e/speaker/1.log" Jan 28 20:10:20 crc kubenswrapper[4985]: I0128 20:10:20.285912 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-6lq6d_b5094b56-07e5-45db-8a13-ce7b931b861e/speaker/0.log" Jan 28 20:10:34 crc kubenswrapper[4985]: I0128 20:10:34.793765 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.064730 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.073040 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.113379 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.225615 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.296241 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/extract/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.300996 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_19f7b28a9b43ae652fc2e0b84ee4ec326dbd0a997d417d0c402b7249a2nqt95_b691bd15-43f8-4823-917b-7c27b8ca4ba6/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.438132 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.687922 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.735640 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.740928 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.878841 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/pull/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.911519 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/util/0.log" Jan 28 20:10:35 crc kubenswrapper[4985]: I0128 20:10:35.949019 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcvlltw_9ec863bb-8b63-4362-9bc6-93c91eebec21/extract/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.064593 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/util/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.316151 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/pull/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.351290 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/util/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.354676 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/pull/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.533830 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/util/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.560392 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/pull/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.606618 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_40d905839fa7263f1f473fab6e11a9af2a700db4f753f3af512410360bv7qds_a2f76b8f-1fff-44e6-931b-d35852c1ab04/extract/0.log" Jan 28 20:10:36 crc kubenswrapper[4985]: I0128 20:10:36.718078 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.007371 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.019685 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.042832 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.246692 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/extract/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.266401 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.271607 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec713xfg6h_096a6287-784c-410e-99c8-16188796d2ea/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.465730 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.663942 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.675791 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.704168 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.910269 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/pull/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.914506 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/util/0.log" Jan 28 20:10:37 crc kubenswrapper[4985]: I0128 20:10:37.918746 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nkthg_c3ffee15-9ee0-496b-920f-87dd09fd08ec/extract/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.125764 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.330201 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-content/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.338945 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.339613 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-content/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.834579 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-content/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.848677 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.854956 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/extract-utilities/0.log" Jan 28 20:10:38 crc kubenswrapper[4985]: I0128 20:10:38.958610 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-v2zt6_bad9c3c9-3333-4c1b-a020-2322b7baae36/registry-server/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.129894 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.136037 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.140581 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.296367 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.309849 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.410842 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/registry-server/1.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.607447 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hvkcw_4845499d-139f-4839-9f9f-4d77c7f0ae37/marketplace-operator/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.625190 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-hvkcw_4845499d-139f-4839-9f9f-4d77c7f0ae37/marketplace-operator/1.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.724435 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.939017 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-content/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.962872 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-utilities/0.log" Jan 28 20:10:39 crc kubenswrapper[4985]: I0128 20:10:39.968133 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-content/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.252599 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-content/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.260151 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/extract-utilities/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.362867 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/registry-server/1.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.543480 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-z2xq5_d59677ee-1cc3-4635-a126-0383e56d3fc0/registry-server/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.585748 4985 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4fx27_478fc51e-7963-4ba3-a5ec-c2b7045b8353/registry-server/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.664436 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-utilities/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.838350 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-utilities/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.864389 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-content/0.log" Jan 28 20:10:40 crc kubenswrapper[4985]: I0128 20:10:40.880377 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-content/0.log" Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.066461 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-utilities/0.log" Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.084952 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/extract-content/0.log" Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.185624 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:10:41 crc kubenswrapper[4985]: I0128 20:10:41.185682 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:10:42 crc kubenswrapper[4985]: I0128 20:10:42.181847 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-5whpv_5cad9e98-172d-4053-83a3-ebee724a6d9c/registry-server/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.151296 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-j28rb_23ef5df5-bfbe-4465-8e87-d69896bf70aa/prometheus-operator-admission-webhook/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.168853 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-s9875_74fbf9d6-ccb4-4d90-9db8-2d4613334d81/prometheus-operator/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.204431 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-6dc9b9664c-kcn7n_e192375e-5db5-46e4-922b-21b8bc5698ba/prometheus-operator-admission-webhook/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.409323 4985 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/1.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.425106 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-nfhqj_a23ac89d-75e4-4511-afaa-ef9d6205a672/operator/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.461928 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-ui-dashboards-66cbf594b5-5w5dn_c9b84394-02f1-4bde-befe-a2a649925c93/observability-ui-dashboards/0.log" Jan 28 20:10:55 crc kubenswrapper[4985]: I0128 20:10:55.540499 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-j7z4h_971845b8-805d-4b4a-a8fd-14f263f17695/perses-operator/0.log" Jan 28 20:11:09 crc kubenswrapper[4985]: I0128 20:11:09.089177 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/kube-rbac-proxy/0.log" Jan 28 20:11:09 crc kubenswrapper[4985]: I0128 20:11:09.223138 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/1.log" Jan 28 20:11:09 crc kubenswrapper[4985]: I0128 20:11:09.225632 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-85fc96dbd6-9qljj_fc080bc5-4b4f-4405-b458-7450aaf8714b/manager/0.log" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.186374 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.186946 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.186992 4985 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.187987 4985 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"} pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.188066 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" containerID="cri-o://bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" gracePeriod=600 Jan 28 20:11:11 crc kubenswrapper[4985]: E0128 20:11:11.308653 4985 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.544921 4985 generic.go:334] "Generic (PLEG): container finished" podID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" exitCode=0 Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.544972 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerDied","Data":"bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"} Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.545010 4985 scope.go:117] "RemoveContainer" containerID="feb11cf010e066de1428423731282f1a1bf65ec6e9b804a07c16b386b1f6b3a9" Jan 28 20:11:11 crc kubenswrapper[4985]: I0128 20:11:11.545746 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:11:11 crc kubenswrapper[4985]: E0128 20:11:11.546094 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:25 crc kubenswrapper[4985]: I0128 20:11:25.263948 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:11:25 crc kubenswrapper[4985]: E0128 20:11:25.264875 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.012321 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.670921524s: [/var/lib/containers/storage/overlay/2f12c37c8eb1e2c5e02f58419690d5a8b196e336584f7ad4540ca4dbdf5fe0b9/diff /var/log/pods/openstack-operators_nova-operator-controller-manager-7bdb645866-7mtzf_9c7284ab-b40f-4275-b85e-77aebd660135/manager/1.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.013034 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.300428756s: [/var/lib/containers/storage/overlay/b7e64f0091f970033e5ed5c0641d5b64ec853c9c21c50a8609f6bef14f51773c/diff /var/log/pods/openstack-operators_heat-operator-controller-manager-594c8c9d5d-fm7nr_cc7f29e1-e6e0-45a0-920a-4b18d8204c65/manager/1.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.014413 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs 
took 1.299472899s: [/var/lib/containers/storage/overlay/1e93255d8360cc04907058d529d9a0ce9a7d586b97b0f6d04d1301099232bc13/diff /var/log/pods/openstack_heat-engine-5df4f6c8f9-fvvqb_45d84233-dc44-4b3c-8aaa-f08ab50c0512/heat-engine/0.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:11:35 crc kubenswrapper[4985]: I0128 20:11:35.016220 4985 fsHandler.go:133] fs: disk usage and inodes count on following dirs took 1.22469184s: [/var/lib/containers/storage/overlay/256b396208fda6cd62f0180af4b905a209625c70c4b22876c86c69eaf719a8d8/diff /var/log/pods/openstack_swift-proxy-5bdcb887dc-rxkm6_12d4e4cf-9153-4a32-9155-f9d13a248a26/proxy-server/0.log]; will not log again for this container unless duration exceeds 2s Jan 28 20:11:36 crc kubenswrapper[4985]: I0128 20:11:36.264170 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:11:36 crc kubenswrapper[4985]: E0128 20:11:36.264724 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:48 crc kubenswrapper[4985]: I0128 20:11:48.264128 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:11:48 crc kubenswrapper[4985]: E0128 20:11:48.265272 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:11:59 crc kubenswrapper[4985]: I0128 20:11:59.264432 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:11:59 crc kubenswrapper[4985]: E0128 20:11:59.265921 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:11 crc kubenswrapper[4985]: I0128 20:12:11.281308 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:12:11 crc kubenswrapper[4985]: E0128 20:12:11.282625 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:25 crc kubenswrapper[4985]: I0128 20:12:25.265045 4985 scope.go:117] "RemoveContainer" 
containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:12:25 crc kubenswrapper[4985]: E0128 20:12:25.266785 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:37 crc kubenswrapper[4985]: I0128 20:12:37.263967 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:12:37 crc kubenswrapper[4985]: E0128 20:12:37.264919 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:49 crc kubenswrapper[4985]: I0128 20:12:49.264285 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:12:49 crc kubenswrapper[4985]: E0128 20:12:49.264912 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:12:49 crc kubenswrapper[4985]: I0128 20:12:49.830293 4985 scope.go:117] "RemoveContainer" containerID="6f9e46511089ed1317a6f65cf916f19a8e3ebe9ec1c94201d055df23d13e16ad" Jan 28 20:13:02 crc kubenswrapper[4985]: I0128 20:13:02.264541 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:02 crc kubenswrapper[4985]: E0128 20:13:02.265363 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:10 crc kubenswrapper[4985]: I0128 20:13:10.131035 4985 generic.go:334] "Generic (PLEG): container finished" podID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerID="0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629" exitCode=0 Jan 28 20:13:10 crc kubenswrapper[4985]: I0128 20:13:10.131600 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" event={"ID":"b1ab1977-13f1-41b6-9edd-cbb936fb8485","Type":"ContainerDied","Data":"0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629"} Jan 28 20:13:10 crc kubenswrapper[4985]: I0128 20:13:10.132498 4985 scope.go:117] "RemoveContainer" containerID="0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629" Jan 28 20:13:10 crc 
Jan 28 20:13:10 crc kubenswrapper[4985]: I0128 20:13:10.568868 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/gather/0.log"
Jan 28 20:13:15 crc kubenswrapper[4985]: I0128 20:13:15.264666 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a"
Jan 28 20:13:15 crc kubenswrapper[4985]: E0128 20:13:15.265390 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324"
Jan 28 20:13:18 crc kubenswrapper[4985]: I0128 20:13:18.672664 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"]
Jan 28 20:13:18 crc kubenswrapper[4985]: I0128 20:13:18.673455 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-sg6vz/must-gather-9vwtc" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" containerID="cri-o://5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367" gracePeriod=2
Jan 28 20:13:18 crc kubenswrapper[4985]: I0128 20:13:18.686994 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-sg6vz/must-gather-9vwtc"]
Jan 28 20:13:19 crc kubenswrapper[4985]: I0128 20:13:19.271588 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/copy/0.log"
Jan 28 20:13:19 crc kubenswrapper[4985]: I0128 20:13:19.272792 4985 generic.go:334] "Generic (PLEG): container finished" podID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerID="5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367" exitCode=143
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.101152 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/copy/0.log"
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.101861 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc"
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.256358 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") pod \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") "
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.256736 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") pod \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\" (UID: \"b1ab1977-13f1-41b6-9edd-cbb936fb8485\") "
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.289712 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6" (OuterVolumeSpecName: "kube-api-access-j7qn6") pod "b1ab1977-13f1-41b6-9edd-cbb936fb8485" (UID: "b1ab1977-13f1-41b6-9edd-cbb936fb8485"). InnerVolumeSpecName "kube-api-access-j7qn6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.295555 4985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-sg6vz_must-gather-9vwtc_b1ab1977-13f1-41b6-9edd-cbb936fb8485/copy/0.log"
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.303635 4985 scope.go:117] "RemoveContainer" containerID="5355598335d0d9dff197dc4d09b9b325ee69e3336b9f5be9371d1aa865456367"
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.303892 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-sg6vz/must-gather-9vwtc"
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.369167 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7qn6\" (UniqueName: \"kubernetes.io/projected/b1ab1977-13f1-41b6-9edd-cbb936fb8485-kube-api-access-j7qn6\") on node \"crc\" DevicePath \"\""
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.409788 4985 scope.go:117] "RemoveContainer" containerID="0f940a9e21cc7bcb3783698fe185a88cc577a4e11e2a41301793da71c8090629"
Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.565039 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "b1ab1977-13f1-41b6-9edd-cbb936fb8485" (UID: "b1ab1977-13f1-41b6-9edd-cbb936fb8485"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:13:20 crc kubenswrapper[4985]: I0128 20:13:20.577094 4985 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/b1ab1977-13f1-41b6-9edd-cbb936fb8485-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 28 20:13:21 crc kubenswrapper[4985]: I0128 20:13:21.278870 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" path="/var/lib/kubelet/pods/b1ab1977-13f1-41b6-9edd-cbb936fb8485/volumes" Jan 28 20:13:30 crc kubenswrapper[4985]: I0128 20:13:30.264320 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:30 crc kubenswrapper[4985]: E0128 20:13:30.265057 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:43 crc kubenswrapper[4985]: I0128 20:13:43.264614 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:43 crc kubenswrapper[4985]: E0128 20:13:43.266131 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:49 crc kubenswrapper[4985]: I0128 20:13:49.938959 4985 scope.go:117] "RemoveContainer" containerID="ae043829729a5304a684bda1750cb3b2c47fa611ecf13670e0e552bc36940e3c" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.909176 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:13:50 crc kubenswrapper[4985]: E0128 20:13:50.909915 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.909948 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" Jan 28 20:13:50 crc kubenswrapper[4985]: E0128 20:13:50.909986 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="gather" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.909994 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="gather" Jan 28 20:13:50 crc kubenswrapper[4985]: E0128 20:13:50.910018 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" containerName="container-00" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910025 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" containerName="container-00" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910236 4985 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="copy" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910278 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ef092c5-c571-4b51-bd8d-16f348128393" containerName="container-00" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.910298 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1ab1977-13f1-41b6-9edd-cbb936fb8485" containerName="gather" Jan 28 20:13:50 crc kubenswrapper[4985]: I0128 20:13:50.914991 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.074875 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.074953 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.075449 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.099834 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.177786 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.177902 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.177948 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.178832 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"redhat-marketplace-jd6sm\" (UID: 
\"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.178836 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.207378 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"redhat-marketplace-jd6sm\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.247363 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:13:51 crc kubenswrapper[4985]: I0128 20:13:51.967926 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.752377 4985 generic.go:334] "Generic (PLEG): container finished" podID="e9909b99-29bd-4096-a5f0-b43e54943093" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" exitCode=0 Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.752434 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00"} Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.752664 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerStarted","Data":"f8915d028979414d1d3011e34cd62d73d66e9d07310be0513d6e50519dc6fc51"} Jan 28 20:13:52 crc kubenswrapper[4985]: I0128 20:13:52.758237 4985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 20:13:53 crc kubenswrapper[4985]: I0128 20:13:53.764711 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerStarted","Data":"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f"} Jan 28 20:13:54 crc kubenswrapper[4985]: I0128 20:13:54.264333 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:13:54 crc kubenswrapper[4985]: E0128 20:13:54.264641 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:13:54 crc kubenswrapper[4985]: I0128 20:13:54.781893 4985 generic.go:334] "Generic (PLEG): container finished" podID="e9909b99-29bd-4096-a5f0-b43e54943093" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" 
exitCode=0 Jan 28 20:13:54 crc kubenswrapper[4985]: I0128 20:13:54.781943 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f"} Jan 28 20:13:55 crc kubenswrapper[4985]: I0128 20:13:55.797124 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerStarted","Data":"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9"} Jan 28 20:13:55 crc kubenswrapper[4985]: I0128 20:13:55.823844 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jd6sm" podStartSLOduration=3.396652202 podStartE2EDuration="5.823823613s" podCreationTimestamp="2026-01-28 20:13:50 +0000 UTC" firstStartedPulling="2026-01-28 20:13:52.753994473 +0000 UTC m=+7243.580557294" lastFinishedPulling="2026-01-28 20:13:55.181165884 +0000 UTC m=+7246.007728705" observedRunningTime="2026-01-28 20:13:55.815603291 +0000 UTC m=+7246.642166112" watchObservedRunningTime="2026-01-28 20:13:55.823823613 +0000 UTC m=+7246.650386434" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.248893 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.250424 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.306088 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:01 crc kubenswrapper[4985]: I0128 20:14:01.952288 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:02 crc kubenswrapper[4985]: I0128 20:14:02.005576 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:14:03 crc kubenswrapper[4985]: I0128 20:14:03.910590 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-jd6sm" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="registry-server" containerID="cri-o://4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" gracePeriod=2 Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.400167 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.521930 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") pod \"e9909b99-29bd-4096-a5f0-b43e54943093\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.522274 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") pod \"e9909b99-29bd-4096-a5f0-b43e54943093\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.522389 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") pod \"e9909b99-29bd-4096-a5f0-b43e54943093\" (UID: \"e9909b99-29bd-4096-a5f0-b43e54943093\") " Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.523015 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities" (OuterVolumeSpecName: "utilities") pod "e9909b99-29bd-4096-a5f0-b43e54943093" (UID: "e9909b99-29bd-4096-a5f0-b43e54943093"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.523367 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.528519 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t" (OuterVolumeSpecName: "kube-api-access-6wh7t") pod "e9909b99-29bd-4096-a5f0-b43e54943093" (UID: "e9909b99-29bd-4096-a5f0-b43e54943093"). InnerVolumeSpecName "kube-api-access-6wh7t". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.545432 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e9909b99-29bd-4096-a5f0-b43e54943093" (UID: "e9909b99-29bd-4096-a5f0-b43e54943093"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.625334 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6wh7t\" (UniqueName: \"kubernetes.io/projected/e9909b99-29bd-4096-a5f0-b43e54943093-kube-api-access-6wh7t\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.625624 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e9909b99-29bd-4096-a5f0-b43e54943093-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936568 4985 generic.go:334] "Generic (PLEG): container finished" podID="e9909b99-29bd-4096-a5f0-b43e54943093" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" exitCode=0 Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936712 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9"} Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936752 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jd6sm" event={"ID":"e9909b99-29bd-4096-a5f0-b43e54943093","Type":"ContainerDied","Data":"f8915d028979414d1d3011e34cd62d73d66e9d07310be0513d6e50519dc6fc51"} Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.936795 4985 scope.go:117] "RemoveContainer" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.937322 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jd6sm" Jan 28 20:14:04 crc kubenswrapper[4985]: I0128 20:14:04.982874 4985 scope.go:117] "RemoveContainer" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.003839 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.013567 4985 scope.go:117] "RemoveContainer" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.014753 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-jd6sm"] Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.087498 4985 scope.go:117] "RemoveContainer" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" Jan 28 20:14:05 crc kubenswrapper[4985]: E0128 20:14:05.091343 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9\": container with ID starting with 4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9 not found: ID does not exist" containerID="4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.091409 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9"} err="failed to get container status \"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9\": rpc error: code = NotFound desc = could not find container \"4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9\": container with ID starting with 4f06d0e93dfe02ce638ead8bcc0a218a28ca22cb947e7dc5d3464244dede40f9 not found: ID does not exist" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.091458 4985 scope.go:117] "RemoveContainer" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" Jan 28 20:14:05 crc kubenswrapper[4985]: E0128 20:14:05.091931 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f\": container with ID starting with 9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f not found: ID does not exist" containerID="9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.091991 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f"} err="failed to get container status \"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f\": rpc error: code = NotFound desc = could not find container \"9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f\": container with ID starting with 9c01868aeb8ae6b0d436c38464b77103b7b0bc8a90b40fb80fbea37c44b7af2f not found: ID does not exist" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.092024 4985 scope.go:117] "RemoveContainer" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" Jan 28 20:14:05 crc kubenswrapper[4985]: E0128 20:14:05.092559 4985 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00\": container with ID starting with fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00 not found: ID does not exist" containerID="fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.092586 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00"} err="failed to get container status \"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00\": rpc error: code = NotFound desc = could not find container \"fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00\": container with ID starting with fcabc448734effd65273e6c92f330e91af2bdfeef3d586cd80824568bb073b00 not found: ID does not exist" Jan 28 20:14:05 crc kubenswrapper[4985]: I0128 20:14:05.278598 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" path="/var/lib/kubelet/pods/e9909b99-29bd-4096-a5f0-b43e54943093/volumes" Jan 28 20:14:06 crc kubenswrapper[4985]: I0128 20:14:06.264740 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:06 crc kubenswrapper[4985]: E0128 20:14:06.265095 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:18 crc kubenswrapper[4985]: I0128 20:14:18.263634 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:18 crc kubenswrapper[4985]: E0128 20:14:18.264441 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:29 crc kubenswrapper[4985]: I0128 20:14:29.274087 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:29 crc kubenswrapper[4985]: E0128 20:14:29.275147 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.501300 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:35 crc kubenswrapper[4985]: E0128 20:14:35.502627 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" 
containerName="registry-server" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.502645 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="registry-server" Jan 28 20:14:35 crc kubenswrapper[4985]: E0128 20:14:35.502662 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-utilities" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.502670 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-utilities" Jan 28 20:14:35 crc kubenswrapper[4985]: E0128 20:14:35.502726 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-content" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.502734 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="extract-content" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.503078 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9909b99-29bd-4096-a5f0-b43e54943093" containerName="registry-server" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.505125 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.519425 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.649466 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.650139 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.650357 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.752938 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753106 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"community-operators-729lv\" (UID: 
\"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753232 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753839 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.753952 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.779117 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"community-operators-729lv\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:35 crc kubenswrapper[4985]: I0128 20:14:35.826641 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:36 crc kubenswrapper[4985]: I0128 20:14:36.421891 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:36 crc kubenswrapper[4985]: I0128 20:14:36.632809 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerStarted","Data":"9e822149934656a89cb6b96054892965dc78f52d082c19a8cb407cbcca399709"} Jan 28 20:14:37 crc kubenswrapper[4985]: I0128 20:14:37.647942 4985 generic.go:334] "Generic (PLEG): container finished" podID="780ddc55-e0ec-4274-8221-1da02779321b" containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" exitCode=0 Jan 28 20:14:37 crc kubenswrapper[4985]: I0128 20:14:37.648031 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225"} Jan 28 20:14:39 crc kubenswrapper[4985]: I0128 20:14:39.684289 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerStarted","Data":"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0"} Jan 28 20:14:43 crc kubenswrapper[4985]: I0128 20:14:43.264787 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:43 crc kubenswrapper[4985]: E0128 20:14:43.265765 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:43 crc kubenswrapper[4985]: I0128 20:14:43.732877 4985 generic.go:334] "Generic (PLEG): container finished" podID="780ddc55-e0ec-4274-8221-1da02779321b" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" exitCode=0 Jan 28 20:14:43 crc kubenswrapper[4985]: I0128 20:14:43.732949 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0"} Jan 28 20:14:44 crc kubenswrapper[4985]: I0128 20:14:44.761616 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerStarted","Data":"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89"} Jan 28 20:14:44 crc kubenswrapper[4985]: I0128 20:14:44.802534 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-729lv" podStartSLOduration=3.305235282 podStartE2EDuration="9.802509633s" podCreationTimestamp="2026-01-28 20:14:35 +0000 UTC" firstStartedPulling="2026-01-28 20:14:37.651829269 +0000 UTC m=+7288.478392090" lastFinishedPulling="2026-01-28 20:14:44.14910362 +0000 UTC m=+7294.975666441" observedRunningTime="2026-01-28 
20:14:44.787835658 +0000 UTC m=+7295.614398489" watchObservedRunningTime="2026-01-28 20:14:44.802509633 +0000 UTC m=+7295.629072464" Jan 28 20:14:45 crc kubenswrapper[4985]: I0128 20:14:45.827662 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:45 crc kubenswrapper[4985]: I0128 20:14:45.827963 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:46 crc kubenswrapper[4985]: I0128 20:14:46.894975 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-729lv" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" probeResult="failure" output=< Jan 28 20:14:46 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:14:46 crc kubenswrapper[4985]: > Jan 28 20:14:55 crc kubenswrapper[4985]: I0128 20:14:55.888929 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:55 crc kubenswrapper[4985]: I0128 20:14:55.961289 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:56 crc kubenswrapper[4985]: I0128 20:14:56.129844 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:56 crc kubenswrapper[4985]: I0128 20:14:56.264110 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:14:56 crc kubenswrapper[4985]: E0128 20:14:56.264455 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:14:56 crc kubenswrapper[4985]: I0128 20:14:56.924499 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-729lv" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" containerID="cri-o://d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" gracePeriod=2 Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.526427 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.655279 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") pod \"780ddc55-e0ec-4274-8221-1da02779321b\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.655371 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") pod \"780ddc55-e0ec-4274-8221-1da02779321b\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.655438 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") pod \"780ddc55-e0ec-4274-8221-1da02779321b\" (UID: \"780ddc55-e0ec-4274-8221-1da02779321b\") " Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.659150 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities" (OuterVolumeSpecName: "utilities") pod "780ddc55-e0ec-4274-8221-1da02779321b" (UID: "780ddc55-e0ec-4274-8221-1da02779321b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.662702 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf" (OuterVolumeSpecName: "kube-api-access-7gmlf") pod "780ddc55-e0ec-4274-8221-1da02779321b" (UID: "780ddc55-e0ec-4274-8221-1da02779321b"). InnerVolumeSpecName "kube-api-access-7gmlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.737747 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "780ddc55-e0ec-4274-8221-1da02779321b" (UID: "780ddc55-e0ec-4274-8221-1da02779321b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.758590 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.758634 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gmlf\" (UniqueName: \"kubernetes.io/projected/780ddc55-e0ec-4274-8221-1da02779321b-kube-api-access-7gmlf\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.758654 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/780ddc55-e0ec-4274-8221-1da02779321b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943365 4985 generic.go:334] "Generic (PLEG): container finished" podID="780ddc55-e0ec-4274-8221-1da02779321b" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" exitCode=0 Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943430 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89"} Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943475 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-729lv" event={"ID":"780ddc55-e0ec-4274-8221-1da02779321b","Type":"ContainerDied","Data":"9e822149934656a89cb6b96054892965dc78f52d082c19a8cb407cbcca399709"} Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943480 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-729lv" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.943503 4985 scope.go:117] "RemoveContainer" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.977403 4985 scope.go:117] "RemoveContainer" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" Jan 28 20:14:57 crc kubenswrapper[4985]: I0128 20:14:57.995905 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.008811 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-729lv"] Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.018496 4985 scope.go:117] "RemoveContainer" containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.072263 4985 scope.go:117] "RemoveContainer" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" Jan 28 20:14:58 crc kubenswrapper[4985]: E0128 20:14:58.072851 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89\": container with ID starting with d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89 not found: ID does not exist" containerID="d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.072883 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89"} err="failed to get container status \"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89\": rpc error: code = NotFound desc = could not find container \"d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89\": container with ID starting with d7f74bd1a33cadd340ddf1297a4a3f3a20e4d199a1fbec0cb9e6ad921defbf89 not found: ID does not exist" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.072905 4985 scope.go:117] "RemoveContainer" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" Jan 28 20:14:58 crc kubenswrapper[4985]: E0128 20:14:58.073364 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0\": container with ID starting with 689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0 not found: ID does not exist" containerID="689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.073392 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0"} err="failed to get container status \"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0\": rpc error: code = NotFound desc = could not find container \"689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0\": container with ID starting with 689ae091893c9f07d31de8f4f6174951203c6153a71b3ce4024959729d1c3be0 not found: ID does not exist" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.073411 4985 scope.go:117] "RemoveContainer" 
containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" Jan 28 20:14:58 crc kubenswrapper[4985]: E0128 20:14:58.074099 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225\": container with ID starting with 51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225 not found: ID does not exist" containerID="51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225" Jan 28 20:14:58 crc kubenswrapper[4985]: I0128 20:14:58.074135 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225"} err="failed to get container status \"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225\": rpc error: code = NotFound desc = could not find container \"51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225\": container with ID starting with 51708013615ec6f0fafcfb5779683efb1a02dbcaf277a4e2aeb6c5ada10a5225 not found: ID does not exist" Jan 28 20:14:59 crc kubenswrapper[4985]: I0128 20:14:59.296504 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="780ddc55-e0ec-4274-8221-1da02779321b" path="/var/lib/kubelet/pods/780ddc55-e0ec-4274-8221-1da02779321b/volumes" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.205816 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9"] Jan 28 20:15:00 crc kubenswrapper[4985]: E0128 20:15:00.206669 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.206692 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" Jan 28 20:15:00 crc kubenswrapper[4985]: E0128 20:15:00.206722 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-content" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.206729 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-content" Jan 28 20:15:00 crc kubenswrapper[4985]: E0128 20:15:00.206764 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-utilities" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.206771 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="extract-utilities" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.207022 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="780ddc55-e0ec-4274-8221-1da02779321b" containerName="registry-server" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.207882 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.220126 4985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.220127 4985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.228711 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9"] Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.318635 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.318716 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.318767 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.420903 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.421202 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.421318 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.422773 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod 
\"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.433861 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.439519 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"collect-profiles-29493855-chvr9\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:00 crc kubenswrapper[4985]: I0128 20:15:00.558861 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:01 crc kubenswrapper[4985]: I0128 20:15:01.127295 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9"] Jan 28 20:15:02 crc kubenswrapper[4985]: I0128 20:15:01.999929 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerStarted","Data":"bf5a9a4213bc02951c845f7cd71f23cca4531b6e3f2b011ea18daea8dd192c3f"} Jan 28 20:15:02 crc kubenswrapper[4985]: I0128 20:15:02.000281 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerStarted","Data":"07a79b28c6c9fd9d1d41b6b6f945c73e2fbcba9416ce92091a201cc21c261287"} Jan 28 20:15:02 crc kubenswrapper[4985]: I0128 20:15:02.029134 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" podStartSLOduration=2.029113315 podStartE2EDuration="2.029113315s" podCreationTimestamp="2026-01-28 20:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 20:15:02.015166269 +0000 UTC m=+7312.841729090" watchObservedRunningTime="2026-01-28 20:15:02.029113315 +0000 UTC m=+7312.855676136" Jan 28 20:15:03 crc kubenswrapper[4985]: I0128 20:15:03.014609 4985 generic.go:334] "Generic (PLEG): container finished" podID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerID="bf5a9a4213bc02951c845f7cd71f23cca4531b6e3f2b011ea18daea8dd192c3f" exitCode=0 Jan 28 20:15:03 crc kubenswrapper[4985]: I0128 20:15:03.014974 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerDied","Data":"bf5a9a4213bc02951c845f7cd71f23cca4531b6e3f2b011ea18daea8dd192c3f"} Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.448215 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.536963 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") pod \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.537126 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") pod \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.537340 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") pod \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\" (UID: \"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859\") " Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.537956 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume" (OuterVolumeSpecName: "config-volume") pod "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" (UID: "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.542268 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" (UID: "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.542436 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp" (OuterVolumeSpecName: "kube-api-access-z86hp") pod "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" (UID: "a2fc5092-d8b4-4d2c-a57e-f0e19ebee859"). InnerVolumeSpecName "kube-api-access-z86hp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.640455 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z86hp\" (UniqueName: \"kubernetes.io/projected/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-kube-api-access-z86hp\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.640489 4985 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:04 crc kubenswrapper[4985]: I0128 20:15:04.640500 4985 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a2fc5092-d8b4-4d2c-a57e-f0e19ebee859-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.040229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" event={"ID":"a2fc5092-d8b4-4d2c-a57e-f0e19ebee859","Type":"ContainerDied","Data":"07a79b28c6c9fd9d1d41b6b6f945c73e2fbcba9416ce92091a201cc21c261287"} Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.040571 4985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07a79b28c6c9fd9d1d41b6b6f945c73e2fbcba9416ce92091a201cc21c261287" Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.040358 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493855-chvr9" Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.544282 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 20:15:05 crc kubenswrapper[4985]: I0128 20:15:05.555464 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493810-v5pld"] Jan 28 20:15:07 crc kubenswrapper[4985]: I0128 20:15:07.289069 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bbf5b95-eb34-48ce-970a-48eec581f83b" path="/var/lib/kubelet/pods/2bbf5b95-eb34-48ce-970a-48eec581f83b/volumes" Jan 28 20:15:09 crc kubenswrapper[4985]: I0128 20:15:09.263787 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:09 crc kubenswrapper[4985]: E0128 20:15:09.264376 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.643565 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:17 crc kubenswrapper[4985]: E0128 20:15:17.645358 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerName="collect-profiles" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.645392 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerName="collect-profiles" Jan 28 20:15:17 crc 
kubenswrapper[4985]: I0128 20:15:17.646077 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2fc5092-d8b4-4d2c-a57e-f0e19ebee859" containerName="collect-profiles" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.650388 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.653549 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.791339 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.791410 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.791570 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.894642 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.894723 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.894770 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.895327 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.895389 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.914196 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") pod \"certified-operators-prgkh\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:17 crc kubenswrapper[4985]: I0128 20:15:17.989981 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:18 crc kubenswrapper[4985]: I0128 20:15:18.574221 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:19 crc kubenswrapper[4985]: I0128 20:15:19.239176 4985 generic.go:334] "Generic (PLEG): container finished" podID="7884ef52-21c1-4085-b345-55b1c360d446" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" exitCode=0 Jan 28 20:15:19 crc kubenswrapper[4985]: I0128 20:15:19.239229 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426"} Jan 28 20:15:19 crc kubenswrapper[4985]: I0128 20:15:19.239555 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerStarted","Data":"9cd2141a28017c0e4e4224a0073cd040cbc8e4c2225c113b10d2e3d36a239263"} Jan 28 20:15:20 crc kubenswrapper[4985]: I0128 20:15:20.255126 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerStarted","Data":"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301"} Jan 28 20:15:22 crc kubenswrapper[4985]: I0128 20:15:22.282925 4985 generic.go:334] "Generic (PLEG): container finished" podID="7884ef52-21c1-4085-b345-55b1c360d446" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" exitCode=0 Jan 28 20:15:22 crc kubenswrapper[4985]: I0128 20:15:22.283019 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301"} Jan 28 20:15:23 crc kubenswrapper[4985]: I0128 20:15:23.264228 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:23 crc kubenswrapper[4985]: E0128 20:15:23.264833 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:15:23 crc kubenswrapper[4985]: I0128 20:15:23.300832 4985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerStarted","Data":"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34"} Jan 28 20:15:23 crc kubenswrapper[4985]: I0128 20:15:23.323422 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-prgkh" podStartSLOduration=2.895397571 podStartE2EDuration="6.323402719s" podCreationTimestamp="2026-01-28 20:15:17 +0000 UTC" firstStartedPulling="2026-01-28 20:15:19.241283448 +0000 UTC m=+7330.067846269" lastFinishedPulling="2026-01-28 20:15:22.669288596 +0000 UTC m=+7333.495851417" observedRunningTime="2026-01-28 20:15:23.320394984 +0000 UTC m=+7334.146957825" watchObservedRunningTime="2026-01-28 20:15:23.323402719 +0000 UTC m=+7334.149965540" Jan 28 20:15:27 crc kubenswrapper[4985]: I0128 20:15:27.990304 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:27 crc kubenswrapper[4985]: I0128 20:15:27.990601 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:29 crc kubenswrapper[4985]: I0128 20:15:29.040791 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-prgkh" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" probeResult="failure" output=< Jan 28 20:15:29 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:15:29 crc kubenswrapper[4985]: > Jan 28 20:15:36 crc kubenswrapper[4985]: I0128 20:15:36.264014 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:36 crc kubenswrapper[4985]: E0128 20:15:36.266354 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:15:38 crc kubenswrapper[4985]: I0128 20:15:38.076277 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:38 crc kubenswrapper[4985]: I0128 20:15:38.144400 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:38 crc kubenswrapper[4985]: I0128 20:15:38.329718 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:39 crc kubenswrapper[4985]: I0128 20:15:39.522223 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-prgkh" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" containerID="cri-o://14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" gracePeriod=2 Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.066608 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.243193 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") pod \"7884ef52-21c1-4085-b345-55b1c360d446\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.243883 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") pod \"7884ef52-21c1-4085-b345-55b1c360d446\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.244081 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") pod \"7884ef52-21c1-4085-b345-55b1c360d446\" (UID: \"7884ef52-21c1-4085-b345-55b1c360d446\") " Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.244788 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities" (OuterVolumeSpecName: "utilities") pod "7884ef52-21c1-4085-b345-55b1c360d446" (UID: "7884ef52-21c1-4085-b345-55b1c360d446"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.245293 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.249321 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb" (OuterVolumeSpecName: "kube-api-access-kkpjb") pod "7884ef52-21c1-4085-b345-55b1c360d446" (UID: "7884ef52-21c1-4085-b345-55b1c360d446"). InnerVolumeSpecName "kube-api-access-kkpjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.323629 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7884ef52-21c1-4085-b345-55b1c360d446" (UID: "7884ef52-21c1-4085-b345-55b1c360d446"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.348103 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7884ef52-21c1-4085-b345-55b1c360d446-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.350732 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kkpjb\" (UniqueName: \"kubernetes.io/projected/7884ef52-21c1-4085-b345-55b1c360d446-kube-api-access-kkpjb\") on node \"crc\" DevicePath \"\"" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534473 4985 generic.go:334] "Generic (PLEG): container finished" podID="7884ef52-21c1-4085-b345-55b1c360d446" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" exitCode=0 Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534522 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34"} Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534528 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-prgkh" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534548 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-prgkh" event={"ID":"7884ef52-21c1-4085-b345-55b1c360d446","Type":"ContainerDied","Data":"9cd2141a28017c0e4e4224a0073cd040cbc8e4c2225c113b10d2e3d36a239263"} Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.534565 4985 scope.go:117] "RemoveContainer" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.558768 4985 scope.go:117] "RemoveContainer" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.590036 4985 scope.go:117] "RemoveContainer" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.592622 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.602399 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-prgkh"] Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.630901 4985 scope.go:117] "RemoveContainer" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" Jan 28 20:15:40 crc kubenswrapper[4985]: E0128 20:15:40.631293 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34\": container with ID starting with 14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34 not found: ID does not exist" containerID="14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.631327 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34"} err="failed to get container status 
\"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34\": rpc error: code = NotFound desc = could not find container \"14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34\": container with ID starting with 14950e8cc6498932ea13c4cb14e90a84693c3056c9ed2c3199986dfb6d9a9b34 not found: ID does not exist" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.631355 4985 scope.go:117] "RemoveContainer" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" Jan 28 20:15:40 crc kubenswrapper[4985]: E0128 20:15:40.632058 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301\": container with ID starting with 58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301 not found: ID does not exist" containerID="58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.632082 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301"} err="failed to get container status \"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301\": rpc error: code = NotFound desc = could not find container \"58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301\": container with ID starting with 58202fd0ac126e7999cc18d189c8fe975c941e97497bec5bcc1e80d42331d301 not found: ID does not exist" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.632101 4985 scope.go:117] "RemoveContainer" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" Jan 28 20:15:40 crc kubenswrapper[4985]: E0128 20:15:40.632324 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426\": container with ID starting with 7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426 not found: ID does not exist" containerID="7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426" Jan 28 20:15:40 crc kubenswrapper[4985]: I0128 20:15:40.632354 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426"} err="failed to get container status \"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426\": rpc error: code = NotFound desc = could not find container \"7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426\": container with ID starting with 7f62bbc7cabed6a20a8ad2fb7530216c8fe3ed84a46f59973c2b0e7ae2e3b426 not found: ID does not exist" Jan 28 20:15:41 crc kubenswrapper[4985]: I0128 20:15:41.280493 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7884ef52-21c1-4085-b345-55b1c360d446" path="/var/lib/kubelet/pods/7884ef52-21c1-4085-b345-55b1c360d446/volumes" Jan 28 20:15:50 crc kubenswrapper[4985]: I0128 20:15:50.081785 4985 scope.go:117] "RemoveContainer" containerID="6c8e48c972aa2e298f7430451a2f30fabf8f72218697856b1aa3451401eef4e3" Jan 28 20:15:51 crc kubenswrapper[4985]: I0128 20:15:51.275162 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:15:51 crc kubenswrapper[4985]: E0128 20:15:51.275892 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:16:03 crc kubenswrapper[4985]: I0128 20:16:03.264325 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:16:03 crc kubenswrapper[4985]: E0128 20:16:03.265276 4985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-rmr8h_openshift-machine-config-operator(ba791a5a-08bb-4a97-a4e4-9b0e06bac324)\"" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" Jan 28 20:16:14 crc kubenswrapper[4985]: I0128 20:16:14.265863 4985 scope.go:117] "RemoveContainer" containerID="bf84a5b2f7ade71be98eaba4e4649a99b16e9ce6dee4311cfed49aa2c05a891a" Jan 28 20:16:15 crc kubenswrapper[4985]: I0128 20:16:15.019452 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" event={"ID":"ba791a5a-08bb-4a97-a4e4-9b0e06bac324","Type":"ContainerStarted","Data":"ccb4b242faf2f155289f8c78cfbb83c60584760e0e0e839f8fc517c62011675e"} Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.477402 4985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:16:56 crc kubenswrapper[4985]: E0128 20:16:56.482939 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483053 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" Jan 28 20:16:56 crc kubenswrapper[4985]: E0128 20:16:56.483141 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-utilities" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483221 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-utilities" Jan 28 20:16:56 crc kubenswrapper[4985]: E0128 20:16:56.483355 4985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-content" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483431 4985 state_mem.go:107] "Deleted CPUSet assignment" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="extract-content" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.483805 4985 memory_manager.go:354] "RemoveStaleState removing state" podUID="7884ef52-21c1-4085-b345-55b1c360d446" containerName="registry-server" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.485972 4985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.496272 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.599614 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.600141 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.600419 4985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.702603 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.702749 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.702799 4985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.708581 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.709851 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.741542 4985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"redhat-operators-gjw48\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:56 crc kubenswrapper[4985]: I0128 20:16:56.829980 4985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:16:57 crc kubenswrapper[4985]: I0128 20:16:57.395948 4985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:16:57 crc kubenswrapper[4985]: I0128 20:16:57.669162 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerStarted","Data":"d0c828507998153509cb8a317ad848048d3eada54492a5c445052c355affa924"} Jan 28 20:16:58 crc kubenswrapper[4985]: I0128 20:16:58.684964 4985 generic.go:334] "Generic (PLEG): container finished" podID="14330adf-7291-4226-8936-5d853944f1a3" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c" exitCode=0 Jan 28 20:16:58 crc kubenswrapper[4985]: I0128 20:16:58.685038 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c"} Jan 28 20:16:59 crc kubenswrapper[4985]: I0128 20:16:59.699400 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerStarted","Data":"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"} Jan 28 20:17:06 crc kubenswrapper[4985]: I0128 20:17:06.797715 4985 generic.go:334] "Generic (PLEG): container finished" podID="14330adf-7291-4226-8936-5d853944f1a3" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175" exitCode=0 Jan 28 20:17:06 crc kubenswrapper[4985]: I0128 20:17:06.797790 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"} Jan 28 20:17:07 crc kubenswrapper[4985]: I0128 20:17:07.816323 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerStarted","Data":"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42"} Jan 28 20:17:07 crc kubenswrapper[4985]: I0128 20:17:07.844623 4985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gjw48" podStartSLOduration=3.214362572 podStartE2EDuration="11.844599337s" podCreationTimestamp="2026-01-28 20:16:56 +0000 UTC" firstStartedPulling="2026-01-28 20:16:58.688038849 +0000 UTC m=+7429.514601680" lastFinishedPulling="2026-01-28 20:17:07.318275624 +0000 UTC m=+7438.144838445" observedRunningTime="2026-01-28 20:17:07.83834034 +0000 UTC m=+7438.664903201" watchObservedRunningTime="2026-01-28 20:17:07.844599337 +0000 UTC m=+7438.671162168" Jan 28 20:17:16 crc kubenswrapper[4985]: I0128 20:17:16.830483 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 
20:17:16 crc kubenswrapper[4985]: I0128 20:17:16.831094 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:17:17 crc kubenswrapper[4985]: I0128 20:17:17.889372 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" probeResult="failure" output=< Jan 28 20:17:17 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:17:17 crc kubenswrapper[4985]: > Jan 28 20:17:27 crc kubenswrapper[4985]: I0128 20:17:27.932021 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" probeResult="failure" output=< Jan 28 20:17:27 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:17:27 crc kubenswrapper[4985]: > Jan 28 20:17:37 crc kubenswrapper[4985]: I0128 20:17:37.897845 4985 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" probeResult="failure" output=< Jan 28 20:17:37 crc kubenswrapper[4985]: timeout: failed to connect service ":50051" within 1s Jan 28 20:17:37 crc kubenswrapper[4985]: > Jan 28 20:17:46 crc kubenswrapper[4985]: I0128 20:17:46.909202 4985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:17:46 crc kubenswrapper[4985]: I0128 20:17:46.996091 4985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:17:47 crc kubenswrapper[4985]: I0128 20:17:47.163715 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.347299 4985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gjw48" podUID="14330adf-7291-4226-8936-5d853944f1a3" containerName="registry-server" containerID="cri-o://a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" gracePeriod=2 Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.901893 4985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.990431 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") pod \"14330adf-7291-4226-8936-5d853944f1a3\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.990618 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") pod \"14330adf-7291-4226-8936-5d853944f1a3\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.990832 4985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") pod \"14330adf-7291-4226-8936-5d853944f1a3\" (UID: \"14330adf-7291-4226-8936-5d853944f1a3\") " Jan 28 20:17:48 crc kubenswrapper[4985]: I0128 20:17:48.992325 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities" (OuterVolumeSpecName: "utilities") pod "14330adf-7291-4226-8936-5d853944f1a3" (UID: "14330adf-7291-4226-8936-5d853944f1a3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.010323 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg" (OuterVolumeSpecName: "kube-api-access-7v2hg") pod "14330adf-7291-4226-8936-5d853944f1a3" (UID: "14330adf-7291-4226-8936-5d853944f1a3"). InnerVolumeSpecName "kube-api-access-7v2hg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.095182 4985 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.095240 4985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7v2hg\" (UniqueName: \"kubernetes.io/projected/14330adf-7291-4226-8936-5d853944f1a3-kube-api-access-7v2hg\") on node \"crc\" DevicePath \"\"" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.148733 4985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "14330adf-7291-4226-8936-5d853944f1a3" (UID: "14330adf-7291-4226-8936-5d853944f1a3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.197367 4985 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/14330adf-7291-4226-8936-5d853944f1a3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407516 4985 generic.go:334] "Generic (PLEG): container finished" podID="14330adf-7291-4226-8936-5d853944f1a3" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" exitCode=0 Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407615 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42"} Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407675 4985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gjw48" event={"ID":"14330adf-7291-4226-8936-5d853944f1a3","Type":"ContainerDied","Data":"d0c828507998153509cb8a317ad848048d3eada54492a5c445052c355affa924"} Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407715 4985 scope.go:117] "RemoveContainer" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.407895 4985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gjw48" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.457851 4985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.470191 4985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gjw48"] Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.472086 4985 scope.go:117] "RemoveContainer" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.510376 4985 scope.go:117] "RemoveContainer" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.560453 4985 scope.go:117] "RemoveContainer" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" Jan 28 20:17:49 crc kubenswrapper[4985]: E0128 20:17:49.560957 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42\": container with ID starting with a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42 not found: ID does not exist" containerID="a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.560987 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42"} err="failed to get container status \"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42\": rpc error: code = NotFound desc = could not find container \"a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42\": container with ID starting with a8b4ee0fe7dfebb7140a2bb465f945e941a2f05d0416d60a1f61ea579732bb42 not found: ID does not exist" Jan 28 20:17:49 crc 
kubenswrapper[4985]: I0128 20:17:49.561010 4985 scope.go:117] "RemoveContainer" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175" Jan 28 20:17:49 crc kubenswrapper[4985]: E0128 20:17:49.561452 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175\": container with ID starting with 930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175 not found: ID does not exist" containerID="930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.561476 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175"} err="failed to get container status \"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175\": rpc error: code = NotFound desc = could not find container \"930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175\": container with ID starting with 930c3fabfc5e42c5524df14eaae911aeed904e9910da9894c4fc75af4ea30175 not found: ID does not exist" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.561488 4985 scope.go:117] "RemoveContainer" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c" Jan 28 20:17:49 crc kubenswrapper[4985]: E0128 20:17:49.561824 4985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c\": container with ID starting with c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c not found: ID does not exist" containerID="c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c" Jan 28 20:17:49 crc kubenswrapper[4985]: I0128 20:17:49.561844 4985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c"} err="failed to get container status \"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c\": rpc error: code = NotFound desc = could not find container \"c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c\": container with ID starting with c921c4acc18f09936005a9fae93f5b75d5d8b187f0ef0b42a4710b7d34bb1c0c not found: ID does not exist" Jan 28 20:17:51 crc kubenswrapper[4985]: I0128 20:17:51.286365 4985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14330adf-7291-4226-8936-5d853944f1a3" path="/var/lib/kubelet/pods/14330adf-7291-4226-8936-5d853944f1a3/volumes" Jan 28 20:18:41 crc kubenswrapper[4985]: I0128 20:18:41.186573 4985 patch_prober.go:28] interesting pod/machine-config-daemon-rmr8h container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 20:18:41 crc kubenswrapper[4985]: I0128 20:18:41.188409 4985 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-rmr8h" podUID="ba791a5a-08bb-4a97-a4e4-9b0e06bac324" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"